perm filename COMMON.1[COM,LSP] blob sn#864787 filedate 1982-10-01 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00656 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00089 00002	∂30-Dec-81  1117	Guy.Steele at CMU-10A 	Text-file versions of DECISIONS and REVISIONS documents  
C00091 00003	∂23-Dec-81  2255	Kim.fateman at Berkeley 	elementary functions
C00094 00004	∂01-Jan-82  1600	Guy.Steele at CMU-10A 	Tasks: A Reminder and Plea 
C00098 00005	∂08-Dec-81  0650	Griss at UTAH-20 (Martin.Griss) 	PSL progress report   
C00107 00006	∂15-Dec-81  0829	Guy.Steele at CMU-10A 	Arrgghhh blag    
C00109 00007	∂18-Dec-81  0918	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	information about Common Lisp implementation  
C00113 00008	∂21-Dec-81  0702	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Extended-addressing Common Lisp 
C00115 00009	∂21-Dec-81  1101	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
C00116 00010	∂21-Dec-81  1512	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Common Lisp
C00119 00011	∂22-Dec-81  0811	Kim.fateman at Berkeley 	various: arithmetic  commonlisp broadcasts  
C00122 00012	∂22-Dec-81  0847	Griss at UTAH-20 (Martin.Griss) 	[Griss (Martin.Griss): Re: Common Lisp]   
C00126 00013	∂23-Dec-81 1306	Guy.Steele at CMU-10A 	Re: various: arithmetic commonlisp broadcasts 
C00134 00014	∂18-Dec-81  1533	Jon L. White <JONL at MIT-XX> 	Extended-addressing Common Lisp   
C00135 00015	∂21-Dec-81  0717	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
C00137 00016	∂22-Dec-81  0827	Griss at UTAH-20 (Martin.Griss) 	Re: various: arithmetic  commonlisp broadcasts
C00139 00017	∂04-Jan-82  1754	Kim.fateman at Berkeley 	numbers in common lisp   
C00148 00018	∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
C00154 00019	∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
C00156 00020	∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
C00158 00021	∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
C00160 00022	∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
C00163 00023	∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
C00166 00024	∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
C00167 00025	∂17-Jan-82  1756	Guy.Steele at CMU-10A 	Sequence functions    
C00170 00026	∂17-Jan-82  2207	Earl A. Killian <EAK at MIT-MC> 	Sequence functions    
C00172 00027	∂18-Jan-82  0235	Richard M. Stallman <RMS at MIT-AI> 	subseq and consing
C00174 00028	∂18-Jan-82  0822	Don Morrison <Morrison at UTAH-20> 	Re: subseq and consing  
C00175 00029	∂02-Jan-82  0908	Griss at UTAH-20 (Martin.Griss) 	Com L  
C00178 00030	∂14-Jan-82  0732	Griss at UTAH-20 (Martin.Griss) 	Common LISP 
C00179 00031	∂14-Jan-82  2032	Jonathan A. Rees <JAR at MIT-MC>   
C00182 00032	∂15-Jan-82  0109	RPG   	Rutgers lisp development project 
C00195 00033	∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
C00201 00034	∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
C00203 00035	∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
C00205 00036	∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
C00207 00037	∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
C00210 00038	∂16-Jan-82  1252	Griss at UTAH-20 (Martin.Griss) 	Kernel for Common LISP    
C00212 00039	∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
C00215 00040	∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
C00216 00041	∂17-Jan-82  0618	Griss at UTAH-20 (Martin.Griss) 	Agenda 
C00219 00042	∂17-Jan-82  1751	Feigenbaum at SUMEX-AIM 	more on Interlisp-VAX    
C00225 00043	∂17-Jan-82  1756	Guy.Steele at CMU-10A 	Sequence functions    
C00228 00044	∂17-Jan-82  2042	Earl A. Killian <EAK at MIT-MC> 	Sequence functions    
C00230 00045	∂18-Jan-82  0235	Richard M. Stallman <RMS at MIT-AI> 	subseq and consing
C00232 00046	∂18-Jan-82  0822	Don Morrison <Morrison at UTAH-20> 	Re: subseq and consing  
C00233 00047	∂18-Jan-82  1602	Daniel L. Weinreb <DLW at MIT-AI> 	subseq and consing  
C00234 00048	∂18-Jan-82  2203	Scott.Fahlman at CMU-10A 	Re: Sequence functions  
C00237 00049	∂19-Jan-82  1551	RPG  	Suggestion    
C00239 00050	∂19-Jan-82  2113	Griss at UTAH-20 (Martin.Griss) 	Re: Suggestion        
C00241 00051	∂20-Jan-82  1604	David A. Moon <MOON5 at MIT-AI> 	Keyword style sequence functions
C00258 00052	∂20-Jan-82  1631	Kim.fateman at Berkeley 	numerics and common-lisp 
C00267 00053	∂20-Jan-82  2008	Daniel L. Weinreb <dlw at MIT-AI> 	Suggestion     
C00269 00054	∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
C00271 00055	∂18-Jan-82  1537	Daniel L. Weinreb <DLW at MIT-AI> 	subseq and consing  
C00272 00056	∂18-Jan-82  2203	Scott.Fahlman at CMU-10A 	Re: Sequence functions  
C00275 00057	∂19-Jan-82  1551	RPG  	Suggestion    
C00278 00058	∂19-Jan-82  2113	Griss at UTAH-20 (Martin.Griss) 	Re: Suggestion        
C00280 00059	∂19-Jan-82  2113	Fahlman at CMU-20C 	Re: Suggestion      
C00282 00060	∂20-Jan-82  1604	David A. Moon <MOON5 at MIT-AI> 	Keyword style sequence functions
C00299 00061	∂20-Jan-82  1631	Kim.fateman at Berkeley 	numerics and common-lisp 
C00308 00062	∂20-Jan-82  2008	Daniel L. Weinreb <dlw at MIT-AI> 	Suggestion     
C00310 00063	∂19-Jan-82  1448	Feigenbaum at SUMEX-AIM 	more on common lisp 
C00318 00064	∂20-Jan-82  2132	Fahlman at CMU-20C 	Implementations
C00326 00065	∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
C00329 00066	∂21-Jan-82  1746	Earl A. Killian <EAK at MIT-MC> 	SET functions    
C00330 00067	∂21-Jan-82  1803	Richard M. Stallman <RMS at MIT-AI>
C00332 00068	∂21-Jan-82  1844	Don Morrison <Morrison at UTAH-20> 
C00335 00069	∂21-Jan-82  2053	George J. Carrette <GJC at MIT-MC> 
C00338 00070	∂21-Jan-82  1144	Sridharan at RUTGERS (Sri) 	S-1 CommonLisp   
C00349 00071	∂21-Jan-82  1651	Earl A. Killian <EAK at MIT-MC> 	SET functions    
C00350 00072	∂21-Jan-82  1803	Richard M. Stallman <RMS at MIT-AI>
C00352 00073	∂21-Jan-82  1844	Don Morrison <Morrison at UTAH-20> 
C00355 00074	∂21-Jan-82  2053	George J. Carrette <GJC at MIT-MC> 
C00357 00075	∂22-Jan-82  1842	Fahlman at CMU-20C 	Re: adding to kernel
C00361 00076	∂22-Jan-82  1914	Fahlman at CMU-20C 	Multiple values
C00363 00077	∂22-Jan-82  2132	Kim.fateman at Berkeley 	Re: adding to kernel
C00367 00078	∂23-Jan-82  0409	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
C00371 00079	∂23-Jan-82  0910	RPG  
C00373 00080	∂23-Jan-82  1841	Fahlman at CMU-20C  
C00376 00081	∂23-Jan-82  2029	Fahlman at CMU-20C 	Re:  adding to kernel    
C00382 00082	∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
C00383 00083	∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
C00385 00084	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00387 00085	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00389 00086	∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
C00393 00087	∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
C00395 00088	∂24-Jan-82  2246	Kim.fateman at Berkeley 	NIL/Macsyma    
C00397 00089	∂25-Jan-82  1558	DILL at CMU-20C 	eql => eq?   
C00400 00090	∂25-Jan-82  1853	Fahlman at CMU-20C 	Re: eql => eq? 
C00401 00091	∂27-Jan-82  1034	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
C00404 00092	∂27-Jan-82  1445	Jon L White <JONL at MIT-MC> 	Multiple mailing lists?  
C00405 00093	∂27-Jan-82  1438	Jon L White <JONL at MIT-MC> 	Two little suggestions for macroexpansion    
C00411 00094	∂27-Jan-82  2202	RPG  	MVLet    
C00414 00095	∂28-Jan-82  0901	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
C00416 00096	∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
C00417 00097	∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
C00419 00098	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00421 00099	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00423 00100	∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
C00427 00101	∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
C00429 00102	∂24-Jan-82  2246	Kim.fateman at Berkeley 	NIL/Macsyma    
C00431 00103	∂25-Jan-82  1436	Hanson at SRI-AI 	NIL and DEC VAX Common LISP
C00433 00104	∂25-Jan-82  1558	DILL at CMU-20C 	eql => eq?   
C00436 00105	∂25-Jan-82  1853	Fahlman at CMU-20C 	Re: eql => eq? 
C00437 00106	∂28-Jan-82  0901	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
C00439 00107	∂28-Jan-82  1235	Fahlman at CMU-20C 	Re: MVLet      
C00444 00108	∂28-Jan-82  1416	Richard M. Stallman <rms at MIT-AI> 	Macro expansion suggestions 
C00446 00109	∂28-Jan-82  1914	Howard I. Cannon <HIC at MIT-MC> 	Macro expansion suggestions    
C00451 00110	∂27-Jan-82  1633	Jonl at MIT-MC Two little suggestions for macroexpansion
C00457 00111	∂28-Jan-82  1633	Fahlman at CMU-20C 	Re: Two little suggestions for macroexpansion
C00459 00112	∂29-Jan-82  0945	DILL at CMU-20C 	Re: eql => eq?    
C00463 00113	∂29-Jan-82  1026	Guy.Steele at CMU-10A 	Okay, you hackers
C00465 00114	∂29-Jan-82  1059	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
C00469 00115	∂29-Jan-82  1146	Guy.Steele at CMU-10A 	MACSYMA timing   
C00471 00116	∂29-Jan-82  1204	Guy.Steele at CMU-10A 	Re: eql => eq?   
C00473 00117	∂29-Jan-82  1225	George J. Carrette <GJC at MIT-MC> 	MACSYMA timing
C00476 00118	∂29-Jan-82  1324	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
C00477 00119	∂29-Jan-82  1332	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
C00478 00120	∂29-Jan-82  1336	Guy.Steele at CMU-10A 	Re: Re: eql => eq?    
C00480 00121	∂29-Jan-82  1654	Richard M. Stallman <RMS at MIT-AI> 	Trying to implement FPOSITION with LAMBDA-MACROs.    
C00483 00122	∂29-Jan-82  2149	Kim.fateman at Berkeley 	Okay, you hackers   
C00486 00123	∂29-Jan-82  2235	HIC at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs.  
C00490 00124	∂30-Jan-82  0006	MOON at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs 
C00491 00125	∂30-Jan-82  0431	Kent M. Pitman <KMP at MIT-MC> 	Those two little suggestions for macroexpansion 
C00493 00126	∂30-Jan-82  1234	Eric Benson <BENSON at UTAH-20> 	Re: MVLet   
C00496 00127	∂30-Jan-82  1351	RPG  	MVlet    
C00497 00128	∂30-Jan-82  1405	Jon L White <JONL at MIT-MC> 	Comparison of "lambda-macros" and my "Two little suggestions ..."
C00505 00129	∂30-Jan-82  1446	Jon L White <JONL at MIT-MC> 	The format ((MACRO . f) ...)  
C00507 00130	∂30-Jan-82  1742	Fahlman at CMU-20C 	Re: MVlet      
C00509 00131	∂30-Jan-82  1807	RPG  	MVlet    
C00511 00132	∂30-Jan-82  1935	Guy.Steele at CMU-10A 	Forwarded message
C00514 00133	∂30-Jan-82  1952	Fahlman at CMU-20C 	Re: MVlet      
C00520 00134	∂30-Jan-82  2002	Fahlman at CMU-20C 	GETPR
C00522 00135	∂30-Jan-82  2201	Richard M. Stallman <RMS at MIT-AI>
C00523 00136	∂31-Jan-82  1116	Daniel L. Weinreb <dlw at MIT-AI> 	GETPR
C00524 00137	∂01-Feb-82  0752	Jon L White <JONL at MIT-MC> 	Incredible co-incidence about the format ((MACRO . f) ...)  
C00526 00138	∂01-Feb-82  0939	HIC at SCRC-TENEX 	Incredible co-incidence about the format ((MACRO . f) ...)   
C00530 00139	∂01-Feb-82  1014	Kim.fateman at Berkeley 	GETPR and compatibility  
C00535 00140	∂01-Feb-82  1034	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	a proposal about compatibility 
C00537 00141	∂01-Feb-82  1039	Daniel L. Weinreb <DLW at MIT-AI> 	Re: MVLet      
C00540 00142	∂01-Feb-82  2315	Earl A. Killian <EAK at MIT-MC> 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs   
C00542 00143	∂01-Feb-82  2315	FEINBERG at CMU-20C 	Compatibility With Maclisp   
C00545 00144	∂01-Feb-82  2319	Earl A. Killian <EAK at MIT-MC> 	GET/PUT names    
C00547 00145	∂01-Feb-82  2319	Howard I. Cannon <HIC at MIT-MC> 	The right way   
C00551 00146	∂01-Feb-82  2321	Jon L White <JONL at MIT-MC> 	MacLISP name compatibility, and return values of update functions
C00556 00147	∂01-Feb-82  2322	Jon L White <JONL at MIT-MC> 	MVLet hair, and RPG's suggestion   
C00560 00148	∂02-Feb-82  0002	Guy.Steele at CMU-10A 	The right way    
C00562 00149	∂02-Feb-82  0110	Richard M. Stallman <RMS at MIT-AI>
C00564 00150	∂02-Feb-82  0116	David A. Moon <Moon at SCRC-TENEX at MIT-AI> 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
C00566 00151	∂02-Feb-82  1005	Daniel L. Weinreb <DLW at MIT-AI>  
C00568 00152	∂02-Feb-82  1211	Eric Benson <BENSON at UTAH-20> 	Re: MacLISP name compatibility, and return values of update functions   
C00570 00153	∂02-Feb-82  1304	FEINBERG at CMU-20C 	a proposal about compatibility    
C00571 00154	∂02-Feb-82  1321	Masinter at PARC-MAXC 	Re: MacLISP name compatibility, and return values of update functions   
C00572 00155	∂02-Feb-82  1337	Masinter at PARC-MAXC 	SUBST vs INLINE, consistent compilation   
C00575 00156	∂02-Feb-82  1417	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: a proposal about compatibility  
C00580 00157	∂02-Feb-82  1539	Richard M. Stallman <RMS at MIT-AI> 	No policy is a good policy  
C00584 00158	∂02-Feb-82  1926	DILL at CMU-20C 	upward compatibility   
C00586 00159	∂02-Feb-82  2148	RPG  	MVLet    
C00587 00160	∂02-Feb-82  2223	Richard M. Stallman <RMS at MIT-AI>
C00589 00161	∂02-Feb-82  2337	David A. Moon <MOON at MIT-MC> 	upward compatibility   
C00591 00162	∂03-Feb-82  1622	Earl A. Killian <EAK at MIT-MC> 	SUBST vs INLINE, consistent compilation   
C00592 00163	∂04-Feb-82  1513	Jon L White <JONL at MIT-MC> 	"exceptions" possibly based on misconception and EVAL strikes again  
C00597 00164	∂04-Feb-82  2047	Howard I. Cannon <HIC at MIT-MC> 	"exceptions" possibly based on misconception and EVAL strikes again   
C00599 00165	∂05-Feb-82  0022	Earl A. Killian <EAK at MIT-MC> 	SUBST vs INLINE, consistent compilation   
C00600 00166	∂05-Feb-82  2247	Fahlman at CMU-20C 	Maclisp compatibility    
C00603 00167	∂06-Feb-82  1200	Daniel L. Weinreb <dlw at MIT-AI> 	Maclisp compatibility    
C00605 00168	∂06-Feb-82  1212	Daniel L. Weinreb <dlw at MIT-AI> 	Return values of SETF    
C00607 00169	∂06-Feb-82  1232	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
C00609 00170	∂06-Feb-82  1251	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Maclisp compatibility 
C00610 00171	∂06-Feb-82  1416	Eric Benson <BENSON at UTAH-20> 	Re: Maclisp compatibility  
C00612 00172	∂06-Feb-82  1429	Howard I. Cannon <HIC at MIT-MC> 	Return values of SETF
C00613 00173	∂06-Feb-82  2031	Fahlman at CMU-20C 	Value of SETF  
C00614 00174	∂06-Feb-82  2102	Fahlman at CMU-20C 	Re: MVLet      
C00616 00175	∂07-Feb-82  0129	Richard Greenblatt <RG at MIT-AI>  
C00618 00176	∂07-Feb-82  0851	Fahlman at CMU-20C  
C00620 00177	∂07-Feb-82  2234	David A. Moon <Moon at MIT-MC> 	Flags in property lists
C00621 00178	∂08-Feb-82  0749	Daniel L. Weinreb <DLW at MIT-MC> 	mv-call   
C00624 00179	∂08-Feb-82  0752	Daniel L. Weinreb <DLW at MIT-MC>  
C00626 00180	∂08-Feb-82  1256	Guy.Steele at CMU-10A 	Flat property lists   
C00627 00181	∂08-Feb-82  1304	Guy.Steele at CMU-10A 	The "Official" Rules  
C00629 00182	∂08-Feb-82  1410	Eric Benson <BENSON at UTAH-20> 	Re:  Flat property lists   
C00632 00183	∂08-Feb-82  1424	Don Morrison <Morrison at UTAH-20> 	Re:  Flat property lists
C00635 00184	∂08-Feb-82  1453	Richard M. Stallman <RMS at MIT-AI>
C00636 00185	∂19-Feb-82  1656	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Revised sequence proposal 
C00637 00186	∂20-Feb-82  1845	Scott.Fahlman at CMU-10A 	Revised sequence proposal    
C00638 00187	∂21-Feb-82  2357	MOON at SCRC-TENEX 	Fahlman's new new sequence proposal, and an issue of policy 
C00645 00188	∂22-Feb-82  0729	Griss at UTAH-20 (Martin.Griss)    
C00647 00189	∂08-Feb-82  1222	Hanson at SRI-AI 	common Lisp 
C00652 00190	∂28-Feb-82  1158	Scott E. Fahlman <FAHLMAN at CMU-20C> 	T and NIL  
C00666 00191	∂28-Feb-82  1342	Scott E. Fahlman <FAHLMAN at CMU-20C> 	T and NIL addendum   
C00668 00192	∂28-Feb-82  1524	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
C00673 00193	∂28-Feb-82  1700	Kim.fateman at Berkeley 	smoking things out of macsyma 
C00676 00194	∂28-Feb-82  1803	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Re:  T and NIL. 
C00679 00195	∂28-Feb-82  2102	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
C00681 00196	∂28-Feb-82  2333	George J. Carrette <GJC at MIT-MC> 	Take the hint.
C00683 00197	∂01-Mar-82  1356	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: T and NIL   
C00687 00198	∂01-Mar-82  2031	Richard M. Stallman <RMS at MIT-AI> 	Pronouncing ()    
C00689 00199	∂01-Mar-82  2124	Richard M. Stallman <RMS at MIT-AI> 	() and T.    
C00693 00200	∂02-Mar-82  1233	Jon L White <JONL at MIT-MC> 	NIL versus (), and more about predicates.    
C00699 00201	∂02-Mar-82  1322	Jon L White <JONL at MIT-MC> 	NOT and NULL: addendum to previous note 
C00700 00202	∂02-Mar-82  1322	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
C00703 00203	∂02-Mar-82  1406	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	I think I am missing something 
C00707 00204	∂03-Mar-82  1158	Eric Benson <BENSON at UTAH-20> 	The truth value returned by predicates    
C00709 00205	∂03-Mar-82  1337	Eric Benson <BENSON at UTAH-20> 	The truth value returned by predicates    
C00711 00206	∂03-Mar-82  1753	Richard M. Stallman <RMS at MIT-AI>
C00713 00207	∂04-Mar-82  1846	Earl A. Killian <EAK at MIT-MC> 	T and NIL   
C00714 00208	∂04-Mar-82  1846	Earl A. Killian <EAK at MIT-MC> 	Fahlman's new new sequence proposal, and an issue of policy   
C00716 00209	∂05-Mar-82  0101	Richard M. Stallman <RMS at MIT-AI> 	COMPOSE 
C00717 00210	∂05-Mar-82  0902	Jon L White <JONL at MIT-MC> 	What are you missing?  and "patching"  ATOM and LISTP  
C00721 00211	∂05-Mar-82  0910	Jon L White <JONL at MIT-MC> 	How useful will a liberated T and NIL be?    
C00724 00212	∂05-Mar-82  1129	MASINTER at PARC-MAXC 	NIL and T   
C00727 00213	∂05-Mar-82  1308	Kim.fateman at Berkeley 	aesthetics, NIL and T    
C00729 00214	∂05-Mar-82  2045	George J. Carrette <GJC at MIT-MC> 	I won't die if (SYMBOLP (NOT 'FOO)) => T, but really now...
C00733 00215	∂05-Mar-82  2312	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Lexical Scoping 
C00735 00216	∂06-Mar-82  1218	Alan Bawden <ALAN at MIT-MC> 	What I still think about T and NIL 
C00737 00217	∂06-Mar-82  1251	Alan Bawden <ALAN at MIT-MC> 	What I still think about T and NIL 
C00739 00218	∂06-Mar-82  1326	Howard I. Cannon <HIC at MIT-MC> 	T/NIL 
C00740 00219	∂06-Mar-82  1351	Eric Benson <BENSON at UTAH-20> 	CAR of NIL  
C00741 00220	∂06-Mar-82  1429	KIM.jkf@Berkeley (John Foderaro) 	t and nil  
C00743 00221	∂06-Mar-82  1911	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: CAR of NIL  
C00747 00222	∂06-Mar-82  2306	JMC  
C00748 00223	∂06-Mar-82  2314	Eric Benson <BENSON at UTAH-20> 	Re: CAR of NIL   
C00750 00224	∂07-Mar-82  0923	Daniel L. Weinreb <dlw at MIT-AI> 	Re: CAR of NIL 
C00751 00225	∂07-Mar-82  1111	Eric Benson <BENSON at UTAH-20> 	Re: CAR of NIL   
C00752 00226	∂07-Mar-82  1609	FEINBERG at CMU-20C 	() vs NIL
C00754 00227	∂07-Mar-82  2121	Richard M. Stallman <RMS at MIT-AI>
C00755 00228	∂08-Mar-82  0835	Jon L White <JONL at MIT-MC> 	Divergence
C00760 00229	∂08-Mar-82  1904	<Guy.Steele at CMU-10A>  	There's a market out there...
C00762 00230	∂10-Mar-82  2021	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Vectors and Arrays   
C00772 00231	∂10-Mar-82  2129	Griss at UTAH-20 (Martin.Griss) 	Re: Vectors and Arrays
C00774 00232	∂10-Mar-82  2350	MOON at SCRC-TENEX 	Vectors and Arrays--briefly   
C00779 00233	∂11-Mar-82  1829	Richard M. Stallman <RMS at MIT-AI>
C00781 00234	∂12-Mar-82  0825	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Re: Vectors and Arrays    
C00786 00235	∂12-Mar-82  1035	MOON at SCRC-TENEX 	Re: Vectors and Arrays   
C00788 00236	∂14-Mar-82  1152	Symbolics Technical Staff 	The T and NIL issues   
C00792 00237	∂14-Mar-82  1334	Earl A. Killian <EAK at MIT-MC> 	The T and NIL issues  
C00795 00238	∂14-Mar-82  1816	Daniel L. Weinreb <dlw at MIT-AI> 	Re: Vectors and Arrays   
C00797 00239	∂14-Mar-82  1831	Jon L White <JONL at MIT-MC> 	The T and NIL issues (and etc.)    
C00801 00240	∂14-Mar-82  1947	George J. Carrette <GJC at MIT-MC> 	T and NIL
C00803 00241	∂14-Mar-82  2046	Jon L White <JONL at MIT-MC> 	Why Vectors? and taking a cue from SYSLISP   
C00811 00242	∂14-Mar-82  2141	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Re: Vectors and Arrays    
C00813 00243	∂17-Mar-82  1846	Kim.fateman at Berkeley 	arithmetic
C00816 00244	∂18-Mar-82  0936	Don Morrison <Morrison at UTAH-20> 	Re: arithmetic
C00818 00245	∂18-Mar-82  1137	MOON at SCRC-TENEX 	complex log    
C00820 00246	∂18-Mar-82  1432	CSVAX.fateman at Berkeley 	INF vs 1/0   
C00822 00247	∂24-Mar-82  2102	Guy.Steele at CMU-10A 	T and NIL   
C00829 00248	∂29-Mar-82  1037	Guy.Steele at CMU-10A 	NIL and ()  
C00832 00249	∂30-Mar-82  0109	George J. Carrette <GJC at MIT-MC> 	NIL and () in VAX NIL.  
C00833 00250	∂06-Apr-82  1337	The Technical Staff of Lawrence Livermore National Laboratory <CL at S1-A> 	T, NIL, ()    
C00835 00251	∂20-Apr-82  1457	RPG   via S1-A 	Test
C00836 00252	∂20-May-82  1316	FEINBERG at CMU-20C 	DOSTRING 
C00838 00253	∂02-Jun-82  1338	Guy.Steele at CMU-10A 	Keyword-style sequence functions
C00842 00254	∂04-Jun-82  0022	MOON at SCRC-TENEX 	Keyword-style sequence functions   
C00843 00255	∂04-Jun-82  0942	Guy.Steele at CMU-10A 	Bug in message about sequence fns    
C00845 00256	∂11-Jun-82  1933	Quux 	Proposed new FORMAT operator: ~U("units")   
C00847 00257	∂12-Jun-82  0819	Quux 	More on ~U (short) 
C00848 00258	∂18-Jun-82  1924	Guy.Steele at CMU-10A 	Suggested feature from EAK 
C00851 00259	∂18-Jun-82  2237	JonL at PARC-MAXC 	Re: Suggested feature from EAK 
C00858 00260	∂19-Jun-82  1230	David A. Moon <Moon at SCRC-TENEX at MIT-AI> 	Proposed new FORMAT operator: ~U("units")   
C00859 00261	∂02-Jul-82  1005	Guy.Steele at CMU-10A 	SIGNUM function  
C00864 00262	∂02-Jul-82  1738	MOON at SCRC-TENEX 	SIGN or SIGNUM 
C00865 00263	∂07-Jul-82  1339	Earl A. Killian            <Killian at MIT-MULTICS> 	combining sin and sind
C00867 00264	∂07-Jul-82  1406	Earl A. Killian            <Killian at MIT-MULTICS> 	user type names  
C00870 00265	∂07-Jul-82  1444	Earl A. Killian            <Killian at MIT-MULTICS> 	trunc  
C00872 00266	∂07-Jul-82  1753	Earl A. Killian <EAK at MIT-MC> 	combining sin and sind
C00873 00267	∂07-Jul-82  1945	Guy.Steele at CMU-10A 	Comment on HAULONG    
C00876 00268	∂07-Jul-82  1951	Guy.Steele at CMU-10A 	Re: trunc   
C00878 00269	∂07-Jul-82  2020	Scott E. Fahlman <Fahlman at Cmu-20c> 	Comment on HAULONG   
C00879 00270	∂08-Jul-82  1034	Guy.Steele at CMU-10A 	HAULONG
C00880 00271	∂08-Jul-82  1723	Earl A. Killian <EAK at MIT-MC> 	HAULONG
C00881 00272	∂08-Jul-82  1749	Kim.fateman at Berkeley 	Re:  HAULONG   
C00882 00273	∂09-Jul-82  1450	Guy.Steele at CMU-10A 	Meeting?    
C00884 00274	∂09-Jul-82  2047	Scott E. Fahlman <Fahlman at Cmu-20c> 	Meeting?   
C00885 00275	∂18-Jul-82  1413	Daniel L. Weinreb <DLW at MIT-AI> 	combining sin and sind   
C00887 00276	∂19-Jul-82  1249	Daniel L. Weinreb <DLW at MIT-AI> 	[REYNOLDS at RAND-AI: [Daniel L. Weinreb <DLW at MIT-AI>: combining sin and sind]]   
C00889 00277	∂19-Jul-82  1328	Earl A. Killian            <Killian at MIT-MULTICS> 	boole  
C00891 00278	∂19-Jul-82  1515	Guy.Steele at CMU-10A 	Re: boole   
C00892 00279	∂19-Jul-82  1951	Scott E. Fahlman <Fahlman at Cmu-20c> 	boole 
C00894 00280	∂20-Jul-82  1632	JonL at PARC-MAXC 	Re: boole  
C00896 00281	∂20-Jul-82  1711	Earl A. Killian <EAK at MIT-MC> 	boole  
C00897 00282	∂20-Jul-82  1737	JonL at PARC-MAXC 	Re: Comment on HAULONG    
C00900 00283	∂21-Jul-82  0759	JonL at PARC-MAXC 	Re: boole, and the still pending name problem.
C00902 00284	∂23-Jul-82  1435	Earl A. Killian <EAK at MIT-MC> 	boole, and the still pending name problem.
C00903 00285	∂23-Jul-82  1436	MOON at SCRC-TENEX 	boole
C00905 00286	∂23-Jul-82  2323	JonL at PARC-MAXC 	Re: boole, and the still pending name problem - Q & A   
C00907 00287	∂24-Jul-82  0118	Alan Bawden <ALAN at MIT-MC> 	Boole
C00910 00288	∂24-Jul-82  1437	Kim.fateman at Berkeley 	elementary functions
C00912 00289	∂25-Jul-82  2141	Guy.Steele at CMU-10A 	Re: elementary functions   
C00914 00290	∂26-Jul-82  0538	JonL at PARC-MAXC 	Re: Boole, and the value of pi 
C00917 00291	∂26-Jul-82  1117	Daniel L. Weinreb <dlw at MIT-AI> 	Re: Boole 
C00919 00292	∂04-Aug-82  1557	Kim.fateman at Berkeley 	comments on the new manual    
C00920 00293	∂04-Aug-82  1557	Kim.fateman at Berkeley  
C00921 00294	∂04-Aug-82  1656	David A. Moon <MOON at SCRC-TENEX at MIT-MC> 	trichotomy    
C00923 00295	∂04-Aug-82  1738	Kim.fateman at Berkeley 	Re:  trichotomy
C00924 00296	∂05-Aug-82  2210	Kim.fateman at Berkeley 	endp 
C00925 00297	∂08-Aug-82  1655	Scott E. Fahlman <Fahlman at Cmu-20c> 	Issues
C00945 00298	∂09-Aug-82  0111	MOON at SCRC-TENEX  
C00948 00299	∂09-Aug-82  2029	Scott E. Fahlman <Fahlman at Cmu-20c>   
C00954 00300	∂09-Aug-82  2220	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and Vectors   
C00960 00301	∂10-Aug-82  0003	MOON at SCRC-TENEX 	Your stream proposal
C00962 00302	∂10-Aug-82  0549	JonL at PARC-MAXC 	Need for "active" objects, and your STREAM proposal.    
C00968 00303	∂10-Aug-82  0826	Scott E. Fahlman <Fahlman at Cmu-20c> 	Function streams
C00971 00304	∂11-Aug-82  0641	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-ML> 	Function streams
C00983 00305	∂11-Aug-82  1914	Scott E. Fahlman <Fahlman at Cmu-20c> 	Function Streams
C00985 00306	∂12-Aug-82  1402	Guy.Steele at CMU-10A 	Common LISP Meeting, etc.  
C00987 00307	∂12-Aug-82  2002	Guy.Steele at CMU-10A 	Meeting - one more note    
C00988 00308	∂13-Aug-82  1251	Eric Benson <BENSON at UTAH-20> 	Notes on 29 July manual    
C00994 00309	∂23-Aug-82  1326	STEELE at CMU-20C 	Results of the 21 August 1982 Common LISP Meeting  
C01060 00310	∂23-Aug-82  2021	Earl A. Killian <EAK at MIT-MC> 	intern 
C01061 00311	∂23-Aug-82  2021	Earl A. Killian <EAK at MIT-MC> 	SET vs. SETF
C01062 00312	∂23-Aug-82  2021	Earl A. Killian <EAK at MIT-MC> 	byte specifiers  
C01064 00313	∂23-Aug-82  2021	Earl A. Killian <EAK at MIT-MC> 	lowercase in print    
C01065 00314	∂23-Aug-82  2029	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Issue #106    
C01066 00315	∂23-Aug-82  2029	Earl A. Killian <EAK at MIT-MC> 	typep  
C01067 00316	∂23-Aug-82  2034	Guy.Steele at CMU-10A 	Re: byte specifiers   
C01068 00317	∂23-Aug-82  2137	Kent M. Pitman <KMP at MIT-MC> 	SET vs. SETF 
C01070 00318	∂24-Aug-82  0032	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	SET vs. SETF  
C01072 00319	∂24-Aug-82  0042	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	lowercase in print 
C01074 00320	∂24-Aug-82  0907	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	typep 
C01076 00321	∂24-Aug-82  1008	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Results    
C01078 00322	∂24-Aug-82  1021	HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility) 	a protest    
C01081 00323	∂24-Aug-82  1115	Jonathan Rees <Rees at YALE> 	Non-local GO's 
C01084 00324	∂24-Aug-82  1209	HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility) 	Re: Non-local GO's
C01086 00325	∂24-Aug-82  1233	FEINBERG at CMU-20C 	Non-local GO's
C01088 00326	∂24-Aug-82  1304	FEINBERG at CMU-20C 	SET vs. SETF  
C01090 00327	∂24-Aug-82  1311	FEINBERG at CMU-20C 	SET vs. SETF  
C01091 00328	∂24-Aug-82  1432	Scott E. Fahlman <Fahlman at Cmu-20c> 	Issue #106 
C01092 00329	∂24-Aug-82  1435	Earl A. Killian <EAK at MIT-MC> 	point 122   
C01093 00330	∂24-Aug-82  1939	Earl A. Killian <EAK at MIT-MC> 	assert 
C01094 00331	∂25-Aug-82  0146	Robert W. Kerns <RWK at MIT-MC> 	SETF and friends 
C01096 00332	∂25-Aug-82  0957	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	SET vs. SETF  
C01098 00333	∂25-Aug-82  1103	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Keyword arguments to LOAD    
C01101 00334	∂25-Aug-82  1123	Kim.jkf at Berkeley 	case sensitivity   
C01108 00335	∂25-Aug-82  1243	Alan Bawden <ALAN at MIT-MC> 	SET vs. SETF   
C01109 00336	∂25-Aug-82  1248	lseward at RAND-RELAY 	case sensitivity 
C01111 00337	∂25-Aug-82  1357	Earl A. Killian            <Killian at MIT-MULTICS> 	set vs. setf
C01114 00338	∂25-Aug-82  1434	Earl A. Killian            <Killian at MIT-MULTICS> 	SET vs. SETF
C01117 00339	∂25-Aug-82  1442	Scott E. Fahlman <Fahlman at Cmu-20c> 	SETF, case, etc.
C01121 00340	∂25-Aug-82  1450	Howard I. Cannon <HIC at MIT-MC> 	Issue #106 
C01123 00341	∂25-Aug-82  1511	Earl A. Killian            <Killian at MIT-MULTICS> 	SETF, case, etc. 
C01126 00342	∂25-Aug-82  1757	Jim Large <LARGE at CMU-20C> 	SETF, case, etc.    
C01127 00343	∂25-Aug-82  2013	Scott E. Fahlman <Fahlman at Cmu-20c> 	SET   
C01129 00344	∂25-Aug-82  2328	Kim.jkf at Berkeley 	case sensitivity, reply to comments    
C01136 00345	∂25-Aug-82  2358	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Splicing reader macros  
C01139 00346	∂26-Aug-82  0014	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	CHECK-ARG-TYPE
C01142 00347	∂26-Aug-82  0041	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Access to documentation strings   
C01148 00348	∂26-Aug-82  0058	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	function specs
C01155 00349	∂26-Aug-82  0934	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	assert
C01156 00350	∂26-Aug-82  0939	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	a protest  
C01157 00351	∂26-Aug-82  1059	Guy.Steele at CMU-10A 	Closures    
C01158 00352	∂26-Aug-82  1119	David.Dill at CMU-10A (L170DD60) 	splicing macros 
C01159 00353	∂26-Aug-82  1123	Scott E. Fahlman <Fahlman at Cmu-20c> 	Closures   
C01162 00354	∂26-Aug-82  1219	Scott E. Fahlman <Fahlman at Cmu-20c> 	Closures (addendum)  
C01165 00355	∂26-Aug-82  1343	Jonathan Rees <Rees at YALE> 	Closures  
C01168 00356	∂26-Aug-82  1428	mike at RAND-UNIX 	RE: CASE SENSITIVITY, REPLY TO COMMENTS  
C01170 00357	∂26-Aug-82  1521	Jonathan Rees <Rees at YALE> 	Closures  
C01173 00358	∂26-Aug-82  1601	Earl A. Killian <EAK at MIT-MC> 	ASSERT 
C01174 00359	∂26-Aug-82  1602	David A. Moon <MOON at MIT-MC> 	2nd generation LOOP macro   
C01184 00360	∂26-Aug-82  1633	Scott E. Fahlman <Fahlman at Cmu-20c> 	CASE SENSITIVITY, REPLY TO COMMENTS 
C01188 00361	∂26-Aug-82  2123	Scott E. Fahlman <Fahlman at Cmu-20c> 	Splicing reader macros    
C01189 00362	∂26-Aug-82  2123	Scott E. Fahlman <Fahlman at Cmu-20c> 	2nd generation LOOP macro 
C01191 00363	∂26-Aug-82  2128	mike at RAND-UNIX 	Re: CASE SENSITIVITY, REPLY TO COMMENTS  
C01193 00364	∂26-Aug-82  2144	Kim.fateman at Berkeley  
C01195 00365	∂26-Aug-82  2149	Scott E. Fahlman <Fahlman at Cmu-20c> 	Access to documentation strings
C01197 00366	∂26-Aug-82  2149	Scott E. Fahlman <Fahlman at Cmu-20c> 	function specs  
C01199 00367	∂27-Aug-82  0924	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	ASSERT
C01200 00368	∂27-Aug-82  1059	MOON at SCRC-TENEX 	2nd generation LOOP macro
C01201 00369	∂27-Aug-82  1140	MOON at SCRC-TENEX 	splicing reader macros   
C01202 00370	∂27-Aug-82  1140	MOON at SCRC-TENEX 	assert    
C01203 00371	∂27-Aug-82  1141	MOON at SCRC-TENEX 	case sensitivity    
C01205 00372	∂27-Aug-82  1140	MOON at SCRC-TENEX 	dynamic closures    
C01208 00373	∂27-Aug-82  1219	MOON at SCRC-TENEX 	function specs 
C01210 00374	∂27-Aug-82  1505	Richard M. Stallman <RMS at MIT-OZ at MIT-AI> 	SET
C01211 00375	∂27-Aug-82  1647	Richard M. Stallman <RMS at MIT-ML>
C01212 00376	∂27-Aug-82  1829	JLK at SCRC-TENEX 	2nd generation LOOP macro 
C01214 00377	∂28-Aug-82  0449	Scott E. Fahlman <Fahlman at Cmu-20c> 	2nd generation LOOP macro 
C01218 00378	∂28-Aug-82  0848	MOON at SCRC-TENEX 	Yellow pages   
C01219 00379	∂28-Aug-82  0853	MOON at SCRC-TENEX 	Order of arguments to ARRAY-DIMENSION   
C01220 00380	∂28-Aug-82  1032	Scott E. Fahlman <Fahlman at Cmu-20c> 	Yellow pages    
C01224 00381	∂28-Aug-82  1100	MOON at SCRC-TENEX 	Order of arguments to ARRAY-DIMENSION   
C01225 00382	∂28-Aug-82  1312	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	COMPILE-FILE  
C01227 00383	∂28-Aug-82  1821	FEINBERG at CMU-20C 	2nd generation LOOP macro    
C01229 00384	∂28-Aug-82  2049	Scott E. Fahlman <Fahlman at Cmu-20c> 	Closures   
C01231 00385	∂29-Aug-82  0028	ucbvax:<Kim:jkf> (John Foderaro) 	cases. reader poll   
C01236 00386	∂29-Aug-82  0839	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Circular structure printing    
C01238 00387	∂29-Aug-82  0853	Scott E. Fahlman <Fahlman at Cmu-20c> 	Circular structure printing    
C01240 00388	∂29-Aug-82  0958	Scott E. Fahlman <Fahlman at Cmu-20c> 	cases. reader poll   
C01245 00389	∂29-Aug-82  1007	Scott E. Fahlman <Fahlman at Cmu-20c> 	case-sensitivity and portability    
C01252 00390	∂29-Aug-82  1027	David.Dill at CMU-10A (L170DD60) 	keyword args to load 
C01253 00391	∂29-Aug-82  1153	MOON at SCRC-TENEX 	keyword args to load
C01255 00392	∂29-Aug-82  1205	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Circular structure printing    
C01256 00393	∂29-Aug-82  1221	Earl A. Killian <EAK at MIT-MC> 	SET    
C01257 00394	∂29-Aug-82  1502	Scott E. Fahlman <Fahlman at Cmu-20c> 	function specs  
C01261 00395	∂29-Aug-82  1820	Kim.fateman at Berkeley  
C01265 00396	∂29-Aug-82  1830	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	macro expansion    
C01272 00397	∂29-Aug-82  2034	Scott E. Fahlman <Fahlman at Cmu-20c>   
C01275 00398	∂29-Aug-82  2056	Scott E. Fahlman <Fahlman at Cmu-20c> 	macro expansion 
C01277 00399	∂29-Aug-82  2141	Kent M. Pitman <KMP at MIT-MC>
C01282 00400	∂29-Aug-82  2148	Kent M. Pitman <KMP at MIT-MC> 	Access to documentation strings  
C01284 00401	∂29-Aug-82  2337	Kent M. Pitman <KMP at MIT-MC> 	No PRINT-time case conversion switch please!    
C01288 00402	∂30-Aug-82  0007	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 
C01295 00403	∂30-Aug-82  0748	Masinter at PARC-MAXC 	Object type names
C01297 00404	∂30-Aug-82  0914	ROD  	LOOP and white pages.   
C01300 00405	∂30-Aug-82  0905	Masinter at PARC-MAXC 	case-sensitivity: a modest proposal  
C01302 00406	∂30-Aug-82  0910	Masinter at PARC-MAXC 	Re: Circular structure printing 
C01304 00407	∂30-Aug-82  0913	Dave Dyer       <DDYER at USC-ISIB> 	note on portability    
C01306 00408	∂30-Aug-82  0957	Dave Dyer       <DDYER at USC-ISIB> 	Circular structure printing 
C01307 00409	∂30-Aug-82  1032	Scott E. Fahlman <Fahlman at Cmu-20c> 	No PRINT-time case conversion switch please!  
C01310 00410	∂30-Aug-82  1124	Scott E. Fahlman <Fahlman at Cmu-20c> 	No PRINT-time case conversion switch please!  
C01313 00411	∂30-Aug-82  1234	Kent M. Pitman <KMP at MIT-MC> 	Access to documentation strings  
C01316 00412	∂30-Aug-82  1327	Scott E. Fahlman <Fahlman at Cmu-20c> 	Access to documentation strings
C01318 00413	∂30-Aug-82  1428	Alan Bawden <ALAN at MIT-MC> 	misinformation about LOOP
C01321 00414	∂30-Aug-82  1642	JonL at PARC-MAXC 	Re: byte specifiers  
C01323 00415	∂30-Aug-82  1654	JonL at PARC-MAXC 	Re: a protest   
C01325 00416	∂31-Aug-82  0756	Scott E. Fahlman <Fahlman at Cmu-20c> 	Masinter's proposal on case    
C01327 00417	∂31-Aug-82  0812	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	2nd generation LOOP macro 
C01331 00418	∂31-Aug-82  0816	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>   
C01333 00419	∂31-Aug-82  0823	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	function specs  
C01336 00420	∂31-Aug-82  0841	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	case-sensitivity and portability    
C01339 00421	∂31-Aug-82  0906	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	LOOP and white pages.     
C01342 00422	∂31-Aug-82  1441	MOON at SCRC-TENEX 	Re: a protest  
C01345 00423	∂31-Aug-82  1517	MOON at SCRC-TENEX 	Agenda item 61 
C01347 00424	∂31-Aug-82  1538	MOON at SCRC-TENEX 	LOAD-BYTE and DEPOSIT-BYTE    
C01348 00425	∂31-Aug-82  1850	Masinter at PARC-MAXC 	Re: case-sensitivity and portability 
C01350 00426	∂31-Aug-82  1952	Scott E. Fahlman <Fahlman at Cmu-20c> 	LOAD-BYTE and DEPOSIT-BYTE
C01351 00427	∂31-Aug-82  2342	Earl A. Killian <EAK at MIT-MC> 	lambda 
C01353 00428	∂01-Sep-82  0046	Kent M. Pitman <KMP at MIT-MC> 	'(LAMBDA ...)
C01356 00429	∂01-Sep-82  0252	DLW at MIT-MC 	lambda    
C01358 00430	∂01-Sep-82  1259	Earl A. Killian            <Killian at MIT-MULTICS> 	lambda 
C01360 00431	∂02-Sep-82  0827	jkf at mit-vax at mit-xx 	Masinter's modest proposal   
C01362 00432	∂02-Sep-82  0919	Richard E. Zippel <RZ at MIT-MC> 	case-sensitivity: a modest proposal 
C01363 00433	∂02-Sep-82  1033	JonL at PARC-MAXC 	Re: SETF and friends [and the "right" name problem]
C01366 00434	∂02-Sep-82  1146	JonL at PARC-MAXC 	Re: a miscellany of your comments   
C01369 00435	∂02-Sep-82  1230	JonL at PARC-MAXC 	Re: CHECK-ARG-TYPE [and CHECK-SUBSEQUENCE]    
C01372 00436	∂02-Sep-82  1246	JonL at PARC-MAXC 	Re: Access to documentation strings 
C01375 00437	∂02-Sep-82  1325	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	string-out 
C01376 00438	∂02-Sep-82  1331	JonL at PARC-MAXC 	Re: 2nd generation LOOP macro  
C01379 00439	∂02-Sep-82  1343	FEINBERG at CMU-20C 	Loop vs Do    
C01380 00440	∂02-Sep-82  1348	MOON at SCRC-TENEX 	Re: CHECK-ARG-TYPE [and CHECK-SUBSEQUENCE]   
C01381 00441	∂02-Sep-82  1349	MOON at SCRC-TENEX 	Re: Access to documentation strings
C01382 00442	∂02-Sep-82  1409	MOON at SCRC-TENEX 	case-sensitivity: a modest proposal
C01383 00443	∂02-Sep-82  1428	MOON at SCRC-TENEX 	Loop vs Do
C01385 00444	∂02-Sep-82  1443	ucbvax:<Kim:jkf> (John Foderaro) 	Re: case-sensitivity: a modest proposal  
C01387 00445	∂02-Sep-82  1525	JonL at PARC-MAXC 	Re: Circular structure printing
C01391 00446	∂02-Sep-82  1815	JonL at PARC-MAXC 	Re: LOAD-BYTE and DEPOSIT-BYTE 
C01392 00447	∂02-Sep-82  1809	JonL at PARC-MAXC 	Re: macro expansion  
C01399 00448	∂02-Sep-82  1955	Kim.fateman at Berkeley 	dlw's portability semantics   
C01401 00449	∂02-Sep-82  2027	ucbvax:<Kim:jkf> (John Foderaro) 	scott's message about case sensitivity   
C01403 00450	∂02-Sep-82  2211	Kent M. Pitman <KMP at MIT-MC> 	It's not just "LOOP vs DO"...    
C01406 00451	∂02-Sep-82  2300	Kent M. Pitman <KMP at MIT-MC>
C01411 00452	∂03-Sep-82  0210	David A. Moon <Moon at SCRC-POINTER at MIT-MC> 	case-sensitivity: an immodest proposal    
C01414 00453	∂03-Sep-82  0827	HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility) 	administrative request 
C01415 00454	∂03-Sep-82  1012	ucbvax:<Kim:jkf> (John Foderaro) 	cases, re: kmp's and moon's mail    
C01420 00455	∂03-Sep-82  1020	Scott E. Fahlman <Fahlman at Cmu-20c> 	cases, re: kmp's and moon's mail    
C01422 00456	∂03-Sep-82  1452	ucbvax:<Kim:jkf> (John Foderaro) 	Re: cases, re: kmp's and moon's mail
C01424 00457	∂03-Sep-82  1519	Guy.Steele at CMU-10A 	REDUCE function re-proposed
C01427 00458	∂03-Sep-82  1520	Guy.Steele at CMU-10A    
C01430 00459	∂03-Sep-82  1520	Guy.Steele at CMU-10A 	Backquote proposal per issue 99 
C01439 00460	∂03-Sep-82  1527	Kent M. Pitman <KMP at MIT-MC> 	More case stuff: speed and accuracy   
C01443 00461	∂03-Sep-82  1551	Guy.Steele at CMU-10A 	DLW query about STRING-OUT and LINE-OUT   
C01444 00462	∂03-Sep-82  1739	JonL at PARC-MAXC 	Re: function specs   
C01447 00463	∂03-Sep-82  1911	MOON at SCRC-TENEX 	REDUCE function re-proposed   
C01448 00464	∂03-Sep-82  1912	MOON at SCRC-TENEX 	Backquote proposal per issue 99    
C01450 00465	∂03-Sep-82  2015	Guy.Steele at CMU-10A 	Backquote proposal    
C01452 00466	∂03-Sep-82  2041	Guy.Steele at CMU-10A 	Clarification of closures and GO
C01460 00467	∂03-Sep-82  2125	STEELE at CMU-20C 	Proposed definition of SUBST   
C01463 00468	∂03-Sep-82  2134	STEELE at CMU-20C 	Another try at SUBST 
C01465 00469	∂03-Sep-82  2139	STEELE at CMU-20C 	Flying off the handle: one more time on SUBST 
C01467 00470	∂03-Sep-82  2136	MOON at SCRC-TENEX 	Agenda Item 74: Interaction of BLOCK and RETURN   
C01476 00471	∂03-Sep-82  2150	Skef Wholey <Wholey at CMU-20C> 	Proposed definition of SUBST, standard identity function 
C01478 00472	∂03-Sep-82  2202	Scott E. Fahlman <Fahlman at Cmu-20c> 	Proposed definition of SUBST   
C01480 00473	∂03-Sep-82  2224	Guy.Steele at CMU-10A 	SUBST  
C01481 00474	∂03-Sep-82  2307	Kent M. Pitman <KMP at MIT-MC> 	Writing PROG as a macro
C01488 00475	∂03-Sep-82  2332	Kent M. Pitman <KMP at MIT-MC> 	Proposed definition of SUBST
C01490 00476	∂04-Sep-82  0608	MOON at SCRC-TENEX 	Clarification of full funarging and spaghetti stacks   
C01491 00477	∂04-Sep-82  0659	TK at MIT-MC   
C01494 00478	∂04-Sep-82  1946	Guy.Steele at CMU-10A 	Mailing list
C01495 00479	∂04-Sep-82  2012	Guy.Steele at CMU-10A 	Re: Clarification of full funarging and spaghetti stacks 
C01497 00480	∂07-Sep-82  1341	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	lambda
C01498 00481	∂07-Sep-82  1350	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	case-sensitivity: an immodest proposal   
C01500 00482	∂07-Sep-82  1500	Kim.fateman at Berkeley 	Another modest proposal  
C01502 00483	∂07-Sep-82  1513	Guy.Steele at CMU-10A 	Re: REDUCE function re-proposed 
C01504 00484	∂07-Sep-82  1513	Kim.fateman at Berkeley 	Another modest proposal  
C01506 00485	∂07-Sep-82  1521	Guy.Steele at CMU-10A 	forgot to CC this
C01510 00486	∂07-Sep-82  1551	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	REDUCE function re-proposed    
C01512 00487	∂07-Sep-82  1641	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	dlw's portability semantics    
C01514 00488	∂07-Sep-82  1641	Jim Large <LARGE at CMU-20C> 	case flames    
C01516 00489	∂07-Sep-82  1648	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	DLW query about STRING-OUT and LINE-OUT  
C01518 00490	∂07-Sep-82  1648	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Clarification of full funarging and spaghetti stacks    
C01520 00491	∂07-Sep-82  2023	Scott E. Fahlman <Fahlman at Cmu-20c> 	DLW query about STRING-OUT and LINE-OUT  
C01521 00492	∂07-Sep-82  2048	Richard E. Zippel <RZ at MIT-MC> 	Another modest proposal   
C01523 00493	∂07-Sep-82  2126	Scott E. Fahlman <Fahlman at Cmu-20c> 	Vote on Cases   
C01526 00494	∂07-Sep-82  2236	Scott E. Fahlman <Fahlman at Cmu-20c> 	Array proposal (long msg) 
C01535 00495	∂07-Sep-82  2341	UCB-KIM:jkf (John Foderaro) 	results of a case poll    
C01540 00496	∂08-Sep-82  1018	RPG   via S1-GATEWAY 	Case vote    
C01541 00497	∂08-Sep-82  1012	Jonathan Rees <Rees at YALE> 	Vote on Cases  
C01543 00498	∂08-Sep-82  1228	FEINBERG at CMU-20C 	Vote on Cases 
C01544 00499	∂08-Sep-82  1552	MOON at SCRC-TENEX 	Array proposal 
C01547 00500	∂08-Sep-82  2334	Kent M. Pitman <KMP at MIT-MC> 	PRINT/READ inversion   
C01551 00501	∂09-Sep-82  0014	Kent M. Pitman <KMP at MIT-MC> 	Array proposal    
C01555 00502	∂09-Sep-82  0232	Jeffrey P. Golden <JPG at MIT-MC> 	Vote on Cases  
C01556 00503	∂09-Sep-82  1142	Scott E. Fahlman <Fahlman at Cmu-20c> 	Printing Arrays 
C01558 00504	∂09-Sep-82  1611	Martin.Griss <Griss at UTAH-20> 	Case   
C01559 00505	∂10-Sep-82  2233	Robert W. Kerns <RWK at SCRC-TENEX at MIT-MC> 	Re: SETF and friends [and the "right" name problem]  
C01561 00506	∂11-Sep-82  0420	DLW at MIT-MC 	Vote 
C01562 00507	∂11-Sep-82  0435	DLW at MIT-MC 	Array proposal 
C01564 00508	∂11-Sep-82  0446	DLW at MIT-MC 	Array proposal (long msg)
C01566 00509	∂11-Sep-82  0446	DLW at MIT-MC 	Printing Arrays
C01570 00510	∂11-Sep-82  1355	STEELE at CMU-20C 	Proposal for ENDP    
C01572 00511	∂11-Sep-82  1500	Glenn S. Burke <GSB at MIT-ML> 	Vote    
C01573 00512	∂11-Sep-82  1537	Kent M. Pitman <KMP at MIT-MC> 	ENDP    
C01575 00513	∂11-Sep-82  1649	STEELE at CMU-20C 	Proposal for ENDP    
C01577 00514	∂11-Sep-82  2155	Guy.Steele at CMU-10A 	KMP's remarks about ENDP   
C01578 00515	∂12-Sep-82  0054	Guy.Steele at CMU-10A 	???    
C01584 00516	∂12-Sep-82  0541	DLW at MIT-MC 	???  
C01586 00517	∂12-Sep-82  1252	Scott E. Fahlman <Fahlman at Cmu-20c> 	ENDP and LET*   
C01588 00518	∂12-Sep-82  1252	Scott E. Fahlman <Fahlman at Cmu-20c> 	ENDP and LET*   
C01590 00519	∂12-Sep-82  1333	MOON at SCRC-TENEX 	ENDP optional 2nd arg    
C01591 00520	∂12-Sep-82  1435	Scott E. Fahlman <Fahlman at Cmu-20c> 	Case  
C01592 00521	∂12-Sep-82  1532	UCBKIM.jkf@Berkeley 	Re: Case 
C01595 00522	∂12-Sep-82  1623	RPG  	Vectors versus Arrays   
C01601 00523	∂12-Sep-82  1828	MOON at SCRC-TENEX 	Vectors versus Arrays    
C01604 00524	∂12-Sep-82  2022	Guy.Steele at CMU-10A 	??? (that is, LET and LET*)
C01605 00525	∂12-Sep-82  2114	Guy.Steele at CMU-10A 	Re: Case    
C01608 00526	∂12-Sep-82  2131	Scott E. Fahlman <Fahlman at Cmu-20c> 	RPG on Vectors versus Arrays   
C01614 00527	∂12-Sep-82  2043	Guy.Steele at CMU-10A 	Job change for Quux   
C01618 00528	∂13-Sep-82  0016	RPG  	Mail duplications  
C01620 00529	∂13-Sep-82  1133	RPG  	Reply to Moon on `Vectors versus Arrays'    
C01623 00530	∂13-Sep-82  1159	Kim.fateman@Berkeley 	vectors, arrays, etc   
C01624 00531	∂13-Sep-82  1354	UCBKIM.jkf@Berkeley 	Re:  Re: Case 
C01626 00532	∂13-Sep-82  1607	Masinter at PARC-MAXC 	Re: Case    
C01628 00533	∂13-Sep-82  1635	Kent M. Pitman <KMP at MIT-MC>
C01631 00534	∂13-Sep-82  2012	JonL at PARC-MAXC 	Re: Clarification of full funarging and spaghetti stacks
C01634 00535	∂13-Sep-82  2230	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Reply to Gabriel on `Vectors versus Arrays'      
C01636 00536	∂14-Sep-82  1823	JonL at PARC-MAXC 	Re: `Vectors versus Arrays',  and the original compromise    
C01642 00537	∂14-Sep-82  1835	JonL at PARC-MAXC 	Desensitizing case-sensitivity 
C01649 00538	∂15-Sep-82  0824	Guy.Steele at CMU-10A 	Case usage in CL manual    
C01651 00539	∂15-Sep-82  1012	Martin.Griss <Griss at UTAH-20> 	Re: Case    
C01652 00540	∂15-Sep-82  1343	Jeffrey P. Golden <JPG at MIT-MC>  
C01654 00541	∂15-Sep-82  1752	Scott E. Fahlman <Fahlman at Cmu-20c> 	OPTIMIZE Declaration 
C01662 00542	∂15-Sep-82  1931	MOON at SCRC-TENEX 	OPTIMIZE Declaration
C01664 00543	∂15-Sep-82  1952	Earl A. Killian <EAK at MIT-MC> 	OPTIMIZE Declaration  
C01665 00544	∂15-Sep-82  2020	Scott E. Fahlman <Fahlman at Cmu-20c> 	OPTIMIZE Declaration 
C01667 00545	∂16-Sep-82  0112	Kent M. Pitman <KMP at MIT-MC>
C01675 00546	∂16-Sep-82  0133	MOON at SCRC-TENEX 	Case usage in CL manual  
C01677 00547	∂16-Sep-82  0133	MOON at SCRC-TENEX 	Hairiness of arrays 
C01680 00548	∂16-Sep-82  0145	MOON at SCRC-TENEX 	Hairiness of arrays 
C01683 00549	∂16-Sep-82  0216	JoSH <JoSH at RUTGERS> 	array hairiness 
C01684 00550	∂16-Sep-82  0353	DLW at MIT-MC 	Hairiness of arrays 
C01687 00551	∂16-Sep-82  0751	Masinter at PARC-MAXC 	Re: #-, #+  
C01689 00552	∂16-Sep-82  0808	Scott E. Fahlman <Fahlman at Cmu-20c> 	Array Displacement   
C01690 00553	∂16-Sep-82  1011	RPG  	Vectors versus Arrays (concluded) 
C01692 00554	∂16-Sep-82  1216	Earl A. Killian            <Killian at MIT-MULTICS> 	arrays 
C01694 00555	∂16-Sep-82  1308	Guy.Steele at CMU-10A 	Indirect arrays  
C01697 00556	∂16-Sep-82  2039	Kent M. Pitman <KMP at MIT-MC> 	Portable declarations  
C01700 00557	∂16-Sep-82  2028	Scott E. Fahlman <Fahlman at Cmu-20c> 	Revised array proposal (long)  
C01712 00558	∂16-Sep-82  2049	Rodney A. Brooks <BROOKS at MIT-OZ at MIT-MC> 	Re: Revised array proposal (long)
C01714 00559	∂16-Sep-82  2051	Scott E. Fahlman <Fahlman at Cmu-20c> 	Portable declarations
C01715 00560	∂16-Sep-82  2207	Kent M. Pitman <KMP at MIT-MC>
C01718 00561	∂16-Sep-82  2330	Glenn S. Burke <GSB at MIT-ML> 	array proposal    
C01721 00562	∂17-Sep-82  1235	STEELE at CMU-20C 	Proposed evaluator for Common LISP (very long)
C01766 00563	Simple Switch Proposal
C01778 00564	∂17-Sep-82  1336	Rodney A. Brooks <BROOKS at MIT-OZ at MIT-MC> 	Re: Revised array proposal (long)
C01780 00565	∂17-Sep-82  1451	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	arrays
C01782 00566	∂17-Sep-82  1450	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Revised array proposal (long)  
C01783 00567	∂17-Sep-82  1741	David.Dill at CMU-10A (L170DD60) 	array proposal  
C01785 00568	∂17-Sep-82  1803	Kent M. Pitman <KMP at MIT-MC> 	EQUAL descending arrays
C01786 00569	∂17-Sep-82  1831	David.Dill at CMU-10A (L170DD60) 	equal descending into SEQUENCES
C01787 00570	∂18-Sep-82  0225	Richard M. Stallman <RMS at MIT-AI> 	Portable declarations  
C01788 00571	∂18-Sep-82  1521	Earl A. Killian <EAK at MIT-MC> 	Proposed evaluator for Common LISP -- declarations  
C01790 00572	∂18-Sep-82  1546	Earl A. Killian <EAK at MIT-MC> 	Proposed evaluator for Common LISP -- declarations  
C01793 00573	∂18-Sep-82  1555	Earl A. Killian <EAK at MIT-MC> 	declarations
C01795 00574	∂18-Sep-82  2117	MOON at SCRC-TENEX 	Declarations from macros 
C01797 00575	∂18-Sep-82  2122	MOON at SCRC-TENEX 	Indirect arrays
C01799 00576	∂18-Sep-82  2207	Richard M. Stallman <RMS at MIT-AI> 	Printing Arrays   
C01801 00577	∂18-Sep-82  2310	Richard M. Stallman <RMS at MIT-AI> 	case    
C01802 00578	∂19-Sep-82  0007	MOON at SCRC-TENEX 	Printing Arrays
C01803 00579	∂19-Sep-82  0032	Kent M. Pitman <KMP at MIT-MC> 	Minor changes to proposed reader syntax    
C01809 00580	∂19-Sep-82  0038	Kent M. Pitman <KMP at MIT-MC>
C01812 00581	∂19-Sep-82  1216	Guy.Steele at CMU-10A 	Reply to msg by ALAN about PROG 
C01814 00582	∂19-Sep-82  1549	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Minor changes to proposed reader syntax
C01817 00583	∂19-Sep-82  1645	Kent M. Pitman <KMP at MIT-MC>
C01820 00584	∂19-Sep-82  1655	Kent M. Pitman <KMP at MIT-MC>
C01823 00585	∂19-Sep-82  1905	Richard M. Stallman <RMS at MIT-OZ at MIT-MC> 	MEMBER and ASSOC vs EQL
C01826 00586	∂19-Sep-82  1934	Scott E. Fahlman <Fahlman at Cmu-20c> 	Minor changes to proposed reader syntax  
C01828 00587	∂19-Sep-82  2219	RMS at MIT-MC  
C01830 00588	∂19-Sep-82  2246	Alan Bawden <ALAN at MIT-MC> 	RETURN in BLOCK and PROG 
C01832 00589	∂20-Sep-82  0654	DLW at MIT-MC 	Proposed evaluator for Common LISP -- declarations
C01833 00590	∂20-Sep-82  0654	DLW at MIT-MC 	Minor changes to proposed reader syntax 
C01835 00591	∂20-Sep-82  1031	RPG  	Vectors and Arrays (Reprise) 
C01836 00592	∂20-Sep-82  1039	RPG  	Declarations and Ignorance   
C01838 00593	∂20-Sep-82  1151	Kent M. Pitman <KMP at MIT-MC> 	VAR-TYPE
C01839 00594	∂20-Sep-82  1445	Earl A. Killian            <Killian at MIT-MULTICS> 	declarations
C01842 00595	∂20-Sep-82  1456	Guy.Steele at CMU-10A 	Getting the type of a variable  
C01844 00596	∂20-Sep-82  1710	MOON at SCRC-TENEX 	Bit vectors    
C01846 00597	∂21-Sep-82  0938	Scott E. Fahlman <Fahlman at Cmu-20c> 	Indented Strings
C01849 00598	∂21-Sep-82  1101	DLW at SCRC-TENEX 	declarations    
C01851 00599	∂21-Sep-82  1138	Andy Freeman <CSD.FREEMAN at SU-SCORE> 	Hash table functions
C01853 00600	∂21-Sep-82  1322	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Hash table functions not all there
C01855 00601	∂21-Sep-82  1347	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	LEXICAL declarations 
C01856 00602	∂21-Sep-82  1409	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Indented Strings   
C01859 00603	RPG Memorial Proposal
C01871 00604	∂23-Sep-82  0449	DLW at MIT-MC 	Arrays and vectors (again)    
C01872 00605	∂23-Sep-82  0702	Leonard N. Zubkoff <Zubkoff at Cmu-20c> 	Arrays and vectors (again)   
C01873 00606	∂23-Sep-82  0929	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and vectors (again)
C01876 00607	∂25-Sep-82  0338	Kent M. Pitman <KMP at MIT-MC> 	Arrays and Vectors
C01883 00608	∂25-Sep-82  0716	Guy.Steele at CMU-10A 	KMP's remarks on arrays    
C01884 00609	∂26-Sep-82  1958	Scott E. Fahlman <Fahlman at Cmu-20c> 	Reply to KMP    
C01888 00610	∂26-Sep-82  2128	STEELE at CMU-20C 	Revised proposed evaluator(s)  
C01973 00611	∂26-Sep-82  2231	Kent M. Pitman <KMP at MIT-MC> 	Indeed, one of us must be confused.   
C01978 00612	∂27-Sep-82  0031	Alan Bawden <ALAN at MIT-MC> 	What is this RESTART kludge?  
C01984 00613	∂27-Sep-82  1848	Scott E. Fahlman <Fahlman at Cmu-20c> 	Indeed, one of us must be confused. 
C01987 00614	∂27-Sep-82  2014	Scott E. Fahlman <Fahlman at Cmu-20c> 	What is this RESTART kludge?   
C01990 00615	∂27-Sep-82  2106	Guy.Steele at CMU-10A 	RESTART and TAGBODY   
C01992 00616	∂28-Sep-82  0601	DLW at MIT-MC 	Arrays and vectors (again)    
C01995 00617	∂28-Sep-82  0614	DLW at MIT-MC 	What is this RESTART kludge?  
C01996 00618	∂28-Sep-82  0616	DLW at MIT-MC 	Indeed, one of us must be confused.
C01999 00619	∂28-Sep-82  0421	KMP at MIT-MC  
C02032 00620	∂28-Sep-82  1753	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and vectors (again)
C02036 00621	∂29-Sep-82  0515	Ginder at CMU-20C 	Re: Arrays and vectors (again) 
C02037 00622	∂29-Sep-82  0635	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and vectors (again)
C02038 00623	∂29-Sep-82  0654	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and Vectors   
C02041 00624	∂29-Sep-82  0707	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and Vectors   
C02044 00625	∂29-Sep-82  0825	Ginder at CMU-20C 	Re: Arrays and vectors (again) 
C02046 00626	∂29-Sep-82  0940	HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility) 	Re: Arrays and Vectors 
C02048 00627	∂29-Sep-82  0956	Guy.Steele at CMU-10A 	Design of Common LISP 
C02050 00628	∂29-Sep-82  1127	RPG  	Proposals
C02051 00629	∂29-Sep-82  1328	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and Vectors   
C02054 00630	∂29-Sep-82  1321	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Re: Arrays and vectors (again) 
C02057 00631	∂29-Sep-82  1721	Alan Bawden <ALAN at MIT-MC> 	What is this RESTART kludge?  
C02060 00632	∂29-Sep-82  1726	Brian G. Milnes <Milnes at CMU-20C> 	Issue 82 of the last CL meeting  
C02063 00633	∂29-Sep-82  1753	Scott E. Fahlman <Fahlman at Cmu-20c> 	What is this RESTART kludge?   
C02066 00634	∂29-Sep-82  1946	Kent M. Pitman <KMP at MIT-MC>
C02068 00635	∂29-Sep-82  1955	Skef Wholey <Wholey at CMU-20C> 	MAKE as a new name for SETF (gasp!)  
C02070 00636	∂29-Sep-82  2036	Scott E. Fahlman <Fahlman at Cmu-20c>   
C02072 00637	∂29-Sep-82  2104	Kent M. Pitman <KMP at MIT-MC>
C02074 00638	∂29-Sep-82  2107	Scott E. Fahlman <Fahlman at Cmu-20c> 	MAKE as a new name for SETF (gasp!) 
C02075 00639	∂29-Sep-82  2330	Alan Bawden <ALAN at MIT-MC> 	What is this RESTART kludge?  
C02078 00640	∂29-Sep-82  2349	MOON at SCRC-TENEX 	Issue 82 of the last CL meeting    
C02084 00641	∂29-Sep-82  2349	MOON at SCRC-TENEX 	arrays and vectors  (long carefully-thought-out message)    
C02094 00642	∂30-Sep-82  0244	Kent M. Pitman <KMP at MIT-MC> 	Vectors/Arrays    
C02095 00643	∂30-Sep-82  0309	Kent M. Pitman <KMP at MIT-MC> 	RESTART 
C02098 00644	∂30-Sep-82  0329	MOON at SCRC-TENEX 	RESTART   
C02099 00645	∂30-Sep-82  0921	Glenn S. Burke <GSB at MIT-ML> 	vectors/arrays    
C02100 00646	∂30-Sep-82  1034	Guy.Steele at CMU-10A 	Clarification    
C02101 00647	∂30-Sep-82  1333	MOON at SCRC-TENEX 	Issue 82 comment, your reply and number crunching 
C02105 00648	∂30-Sep-82  1333	MOON at SCRC-TENEX 	Issue #97, Colander page 134: floating-point assembly and disassembly 
C02114 00649	∂30-Sep-82  1404	Scott E. Fahlman <Fahlman at Cmu-20c> 	Issue 82 comment
C02116 00650	∂30-Sep-82  1447	Scott E. Fahlman <Fahlman at Cmu-20c> 	Down with RESTART    
C02119 00651	∂30-Sep-82  1535	Kent M. Pitman <KMP at MIT-MC> 	RESTART 
C02121 00652	∂30-Sep-82  1553	Scott E. Fahlman <Fahlman at Cmu-20c> 	RESTART    
C02123 00653	∂30-Sep-82  1601	Earl A. Killian <EAK at MIT-MC> 	arrays and vectors  (long carefully-thought-out message) 
C02124 00654	∂01-Oct-82  0107	Alan Bawden <ALAN at MIT-MC> 	DEFSTRUCT options syntax 
C02128 00655	∂01-Oct-82  0546	Scott E. Fahlman <Fahlman at Cmu-20c> 	DEFSTRUCT options syntax  
C02131 00656	∂01-Oct-82  1642	JMC  	setf → set    
C02132 ENDMK
C⊗;
∂30-Dec-81  1117	Guy.Steele at CMU-10A 	Text-file versions of DECISIONS and REVISIONS documents  
Date: 30 December 1981 1415-EST (Wednesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Text-file versions of DECISIONS and REVISIONS documents
Message-Id: <30Dec81 141557 GS70@CMU-10A>

The files DECISIONS DOC and REVISIONS DOC  on directory  GLS;
at  MIT-MC  are available.  They are text files, as opposed to
PRESS files.  The former is 9958 lines long, and the latter is
1427.
--Guy

∂23-Dec-81  2255	Kim.fateman at Berkeley 	elementary functions
Date: 23 Dec 1981 22:48:00-PST
From: Kim.fateman at Berkeley
To: guy.steele@cmu-10a
Subject: elementary functions
Cc: Kim.jkf@UCB-C70, gjc@MIT-MC, griss@utah-20, jonl@MIT-MC, masinter@PARC-MAXC,
    rpg@SU-AI

I have no objection to making lisp work better with numerical computation.
I think that putting in elementary functions is a far more complicated
issue than you seem to think.  Branch cuts are probably not hard.
APL's notion of a user-settable "fuzz" is gross.  Stan Brown's
model of arithmetic is (Ada notwithstanding) inadequate as a prescriptive
model (Brown agrees).  If you provide a logarithm function, are you
willing to bet that it will hold up to the careful scrutiny of people
like Kahan?
  
As for the vagaries of arithmetic in Franz, I hope such things will
get ironed out along with vagaries in the Berkeley UNIX system.  Kahan
and I intend to address such issues.  I think it is a mistake to
address such issues as LANGUAGE issues, though.

I have not seen Penfield's article (yet). 

As for the rational number implementation question, it seems to me
that implementation of rational numbers (as pairs) loses little by
being programmed in Lisp.  Writing bignums in lisp loses unless you
happen to have access to machine instructions like 64-bit divided by
32 bit, from Lisp.  
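
For concreteness, a pair representation needs little more than GCD
normalization; the following is only a sketch (the names MAKE-RAT, RAT-PLUS,
and RAT-TIMES are hypothetical, not anyone's proposal), but it shows why
coding it in Lisp costs so little:

  (defun make-rat (num den)              ; normalize by the GCD; keep DEN positive
    (let ((g (gcd num den)))
      (if (minusp den) (setq g (- g)))
      (cons (/ num g) (/ den g))))       ; exact integer division after the GCD

  (defun rat-plus (r s)                  ; n1/d1 + n2/d2 = (n1 d2 + n2 d1) / (d1 d2)
    (make-rat (+ (* (car r) (cdr s)) (* (car s) (cdr r)))
              (* (cdr r) (cdr s))))

  (defun rat-times (r s)                 ; (n1 n2) / (d1 d2)
    (make-rat (* (car r) (car s)) (* (cdr r) (cdr s))))

  ;; (rat-plus (make-rat 1 3) (make-rat 1 6))  =>  (1 . 2), i.e. 1/2

The bignum case is different precisely because its inner loop wants the
double-length divide mentioned above.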

I would certainly like to see common lisp be successful;  if you
have specific plans for the arithmetic that you wish to get comments and/or
help on, please give them a wider circulation.  E.g. the IEEE
floating point committee might like to see how you might incorporate
good ideas in a language.
I would be glad to pass your plans on to them.

∂01-Jan-82  1600	Guy.Steele at CMU-10A 	Tasks: A Reminder and Plea 
Date:  1 January 1982 1901-EST (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Tasks: A Reminder and Plea
Message-Id: <01Jan82 190137 GS70@CMU-10A>

At the November meeting, a number of issues were deferred with the
understanding that certain people would make concrete proposals for
consideration and inclusion in the second draft of the manual.  I
promised to get the second draft out in January, and to do that I need
those proposals pretty soon.  I am asking to get them in two weeks (by
January 15).  Ideally they would already be in SCRIBE format, but I'll
settle for any reasonable-looking ASCII file of text approximately in
the style of the manual.  BOLIO files are okay too; I can semi-automate
BOLIO to SCRIBE conversion.  I would prefer not to get rambling prose,
outlines, or sentence fragments; just nice, clean, crisp text that
requires only typographical editing before inclusion in the manual.
(That's the goal, anyway; I realize I may have to do some
industrial-strength editing for consistency.)  A list of the outstanding
tasks follows.

--Guy

GLS: Propose a method for allowing special forms to have a dual
implementation as both a macro (for user and compiler convenience)
and as a fexpr (for interpreter speed).  Create a list of primitive
special forms not easily reducible via macros to other primitives.
As part of this suggest an alternative to FUNCTIONP of two arguments.

MOON: Propose a rigorous mathematical formulation of the treatment
of the optional tolerance-specification argument for MOD and REMAINDER.
(I had a crack at this and couldn't figure it out, though I think I
came close.)

GLS: Propose specifications for lexical catch, especially a good name for it.

Everybody: Propose a clean and consistent declaration system.

MOON/DLW/ALAN: Propose a cleaned-up version of LOOP.  Alter it to handle
most interesting sequence operations gracefully.

SEF: Propose a complete set of keyword-style sequence operations.

GLS: Propose a set of functional-style sequence operations.

GJC/RLB: Polish the VAXMAX proposal for feature sets and #+ syntax.

ALAN: Propose a more extensible character-syntax definition system.

GLS: Propose a set of functions to interface to a filename/pathname
system in the spirit of the LISP Machine's.

LISPM: Propose a new error-handling system.

LISPM: Propose a new package system.


∂08-Dec-81  0650	Griss at UTAH-20 (Martin.Griss) 	PSL progress report   
Date:  8 Dec 1981 0743-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: PSL progress report
To: rpg at SU-AI
cc: griss at UTAH-20

How was common LISP meeting?
Did you meet Ohlander?

Excuse me if the following was already remailed to you; there seems to be a mailer bug:
                            PSL Interest Group
                              2 December 1981


     Since my last message at the end of October, we have made significant
progress on the VAX version of PSL. Most of the effort this last month has
been directed at VAX PSL, with some utility work on the DEC-20 and Apollo.
Please send a message if you wish to be removed from this mailing LIST, or
wish other names to be added.

	Martin L. Griss,
	CS Dept., 3160 MEB,
	University of Utah,
	Salt Lake City, Utah 84112.
	(801)-581-6542

--------------------------------------------------------------------------

Last month, we started the VAX macros and the LAP-to-UNIX-assembler ("as")
converter in earnest.  We used the PSL-20 V2 sources and the PSL-to-MIDAS
compiler c-macros
and tables as a guide. After some small pieces of code were tested, cross
compilation on the DEC-20 and assembly on the VAX proceeded full-bore. Just
before Thanksgiving, there was rapid progress resulting in the first
executing PSL on the VAX. This version consisted mostly of the kernel
modules of the PSL-20 version, without the garbage collector, resident LAP
and some debugging tools. Most of the effort in implementing these smaller
modules comes from the requirement for a small amount of LAP to provide the
compiled function/interpreted function interface, and efficient variable
binding operations.  The resident LAP has to be newly written for the VAX.
The c-macros and compiler of course have been fully tested in the process
of building the kernel.

It was decided to produce a new stop-and-copy (two space) collector for
PSL-VAX, to replace the PSL-20 compacting collector.  This collector was
written in about a day and tested by loading it into PSL-20 and dynamically
redefining the compacting collector. On the DEC-20, it seems about 50%
faster than the compacting collector, and MUCH simpler to maintain. It will
be used for the Extended addressing PSL-20. This garbage collector is now
in use with PSL-VAX.

Additional ("non-kernel") modules have also been incorporated in this
cross-compilation phase (they are normally loaded as LAP into PSL-20) to
provide a usable interpreted PSL. PSL-VAX V1 now runs all of the Standard
LISP test file, and most utility modules run interpretively (RLISP parser,
structure editor, etc).  We may compile the RLISP parser and support in the
next build and have a complete RLISP for use until we have resident LAP and
compiler.  The implementation of the resident LAP, a SYSCALL function, etc
should take a few weeks. One possibility is to look at the Franz LISP fasl
and object file loader, and consider using the Unix assembler in a lower
fork with a fasl loader.

Preliminary timings of small interpreted code segments indicate that this
version of PSL runs somewhat slower than FranzLISP. There are functions that
are slower and functions that are faster (usually because of SYSLISP
constructs).  We will time some compiled code shortly (have to
cross-compile and link into kernel in current PSL) to identify good and bad
constructs.  We will also spend some time studying the code emitted, and
change the code-generator tables to produce the next version, which we
expect to be quite a bit faster. The current code generator does not use
any three address or indexing mode operations.

We will shortly concentrate on the first Apollo version of PSL.  We do not
expect any major surprises. Most of the changes from the PSL-20 system
(byte/word conflicts) have now been completely flushed out in the VAX
version.  The 68000 tables should be modeled very closely on the VAX
tables. The current Apollo assembler, file-transfer program, and debugger
are not as powerful as the corresponding VAX tools, and this will make work
a little harder. To compensate, there will be fewer source changes to check
out.



M
-------

Eric
Just finished my long trip plus recovery from East coast flu's etc. Can
you compile the TAK function for me using your portable compiler and send
me the code? Also, could you time it on (TAK 18. 12. 6.)? Here's the code
I mean:

(defun tak (x y z)
       (cond ((not (< y x))
	      z)
	     (t (tak (tak (1- x) y z)
		     (tak (1- y) z x)
		     (tak (1- z) x y)))))
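
(For reference, (TAK 18. 12. 6.) evaluates to 7, which gives the timing run
an easy correctness check.)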

I'm in the process of putting together a synopsis of the results from
the meeting. In short, from your viewpoint, we decided that it would be
necessary for us (Common Lisp) to specify a small virtual machine and
for us to then supply to all interested parties the rest of the system
in Common Lisp code. This means that there would be a smallish number
of primitives that you would need to implement. I assume that this
is satisfactory for the Utah contingent. 

Unfortunately, a second meeting will be necessary to complete the agenda 
since we did not quite finish. In fact, I was unable to travel to
Washington on this account.
∂15-Dec-81  0829	Guy.Steele at CMU-10A 	Arrgghhh blag    
Date: 15 December 1981 1127-EST (Tuesday)
From: Guy.Steele at CMU-10A
To: rpg at SU-AI
Subject:  Arrgghhh blag
Message-Id: <15Dec81 112717 GS70@CMU-10A>

Foo.  I didn't want to become involved in an ANSI standard, and I have
told people so.  For one thing, it looks like a power play and might
alienate people such as the InterLISP crowd, and I wouldn't blame them.
In any case, I don't think it is appropriate to consider this until
we at least have a full draft manual.  If MRG wants to fight that fight,
let him at it.
I am working on collating the bibliographic entries.  I have most of them
on-line already, but just have to convert from TJ6 to SCRIBE format.
I agree that the abstract is not very exciting -- it is
practically stodgy.  I was hoping you would know how to give it some oomph,
some sparkle.  If not, we'll just send it out as is and try to sparkle up
the paper if it is accepted.  Your suggestions about explaining TNBIND
and having a diagram are good.
--Q

∂18-Dec-81  0918	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	information about Common Lisp implementation  
Date: 18 Dec 1981 1214-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: information about Common Lisp implementation
To: rpg at SU-AI, jonl at MIT-AI

We are about to sign a contract with DEC's LCG whereby they sponsor us
to produce an extended addressing Lisp.  We are still discussing whether
this should be Interlisp or Common Lisp.  I can see good arguments in
both directions, and do not have a strong preference, but I would
slightly prefer Common Lisp.  Do you know whether there are any
implementations of Common Lisp, or something reasonably close to it? I
am reconciled to producing my own "kernel", probably in assembly
language, though I have some other candidates in mind too. But I would
prefer not to have to do all of the Lisp code from scratch.

As you may know, DEC is probably going to support a Lisp for the VAX. My
guess is that we will be very likely to do the same dialect that  is
decided upon there.  The one exception would be if it looks like MIT (or
someone else) is going to do an extended implementation of Common Lisp.
If so, then we would probably do Interlisp, for completeness.

We have some experience in Lisp implementation now, since Elisp (the
extended implementation of Rutgers/UCI Lisp) is essentially finished.
(I.e. there are some extensions I want to put in, and some optimizations,
but it does allow any sane R/UCI Lisp code to run.) The interpreter now
runs faster than the original R/UCI lisp interpreter. Compiled code is
slightly slower, but we think this is due to the fact that we are not
yet compiling some things in line that should be. (Even CAR is not
always done in line!)  The compiler is Utah's portable compiler,
modified for the R/UCI Lisp dialect.  It does about what you would want
a Lisp compiler to do, except that it does not open code arithmetic
(though a later compiler has some abilities in that direction).  I
suspect that for a Common Lisp implementation we would try to use the
PDP-10 Maclisp compiler as a base, unless it is too crufty to understand
or modify.  Changing compilers to produce extended code turns out not to
be a very difficult job.
-------

∂21-Dec-81  0702	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Extended-addressing Common Lisp 
Date: 21 Dec 1981 0957-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Extended-addressing Common Lisp
To: JONL at MIT-XX
cc: rpg at SU-AI
In-Reply-To: Your message of 18-Dec-81 1835-EST

thanks.  At the moment the problem is that DEC is not sure whether they
are interested in Common Lisp or Interlisp.  We will probably
follow the decision they make for the VAX, which should be done
sometime within a month.  What surprised me about that was that, from what I
can hear, one of Interlisp's main advantages was supposed to be that the
project was further along on the VAX than the NIL project.  That sounds
odd to me.  I thought NIL had been released.  You might want to talk
with some of the folks at DEC.  The only one I know is Kalman Reti,
XCON.RETI@DEC-MARLBORO.
-------

∂21-Dec-81  1101	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
Date: 21 Dec 1981 1355-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Common Lisp   
To: RPG at SU-AI
In-Reply-To: Your message of 21-Dec-81 1323-EST

I am very happy to hear this.  We have used their compiler for Elisp,
as you may know, and have generally been following their work.  I
have been very impressed also, and would be very happy to see their
work get into something that is going to be more widely used than
Standard Lisp.
-------

∂21-Dec-81  1512	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Common Lisp
Date: 21 Dec 1981 1806-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Common Lisp
To: rpg at SU-AI, griss at UTAH-20

I just had a conversation with JonL which I found to be somewhat
unsettling.  I had hoped that Common Lisp was a sign that the Maclisp
community was willing to start doing a common development effort. It
begins to look like this is not the case.  It sounds to me like the most
we can hope for is a bunch of Lisps that will behave quite differently,
have completely different user facilities, but will have a common subset
of language facilities which will allow knowledgeable users to write
transportable code, if they are careful.  I.e. it looks a lot like the
old Standard Lisp effort, wherein you tried to tweak existing
implementations to support the Standard Lisp primitives.  I thought more
or less everyone agreed that hadn't worked so well, which is why there are new
efforts at Utah to do something really transportable.  I thought
everybody agreed that these days the way you did a Lisp was to write
some small kernel in an implementation language, and then have a lot of
Lisp code, and that the Lisp code would be shared.

Supposing that we and DEC do agree to proceed with Common Lisp, would
you be interested in starting a Common Lisp sub-conspiracy, i.e. a group
of people interested in a shared Common Lisp implementation?  While we
are going to have support from DEC, that support is going to be $70K
(including University overhead) which is going to be a drop in the
bucket if we have to do a whole system, rather than just a VM and some
tweaking.

-------

∂22-Dec-81  0811	Kim.fateman at Berkeley 	various: arithmetic;  commonlisp broadcasts  
Date: 22 Dec 1981 08:04:24-PST
From: Kim.fateman at Berkeley
To: guy.steele@cmu-10a
Subject: various: arithmetic;  commonlisp broadcasts
Cc: gjc@mit-mc, griss@utah-20, Kim.jkf@Berkeley, jonl@mit-mc, masinter@parc-maxc, rpg@su-ai

The commonlisp broadcasts seem to include token representatives from Berkeley (jkf) and Utah (dm).
I think that including fateman@berkeley and griss@utah, too, would be nice.

I noticed in the interlisp representative's report (the first to arrive
in "clear text", not press format) that arithmetic needs are being
dictated in such a way as to be "as much as you would want for an
algebraic manipulation system such as Macsyma."   Since ratios and
complex numbers are not supported in the base Maclisp, I wonder why
they would be considered important to have in the base common lisp?

Personally, having the common lisp people dictate the results of
elementary functions, the semantics of bigfloat (what happened to
bigfloat? Is it gone?), single and double...
and such, seems overly ambitious and unnecessary.
No other language, even Fortran or Ada, does much of this, and what it
does is usually not very good.

The true argument for including such stuff is NOT requirements of 
algebraic  manipulation stuff, but the prospect of doing
ARITHMETIC manipulation stuff with C.L.  Since only a few people are
familiar with Macsyma and Macsyma-like systems, requirements expressed
in the form "macsyma needs it"  seem unarguable.  But they are not...

∂22-Dec-81  0847	Griss at UTAH-20 (Martin.Griss) 	[Griss (Martin.Griss): Re: Common Lisp]   
Date: 22 Dec 1981 0944-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: [Griss (Martin.Griss): Re: Common Lisp]
To: rpg at SU-AI
cc: griss at UTAH-20

This is part of my response to Hedrick's last message. I guess I don't know
what JonL said to him... I feel that I would be able to make more informed
decisions, and to interact more on Common LISP, if I were on the mailing list.
I believe that PSL is a pretty viable replacement for Standard LISP, and
maybe it could provide some kernel for CL. We are on a course now that really wants us to
finish our current "new-LISP" and to begin using it for applications in the next
2-3 months (e.g. NSF and Boeing support). I think having an association with CL
would help some funding efforts, maybe ARPA, Schlumberger, etc.

Perhaps we could talk on phone?
M
                ---------------

Date: 22 Dec 1981 0940-MST
From: Griss (Martin.Griss)
Subject: Re: Common Lisp
To: HEDRICK at RUTGERS
cc: Griss
In-Reply-To: Your message of 21-Dec-81 1606-MST

   Some more thoughts. Actually, I haven't heard anything "official" about
decisions on CommonLISP. RPG visited here, and I think our concerns that the CL
definition was too large (even larger than the InterLISP VM) helped formulate
a kernel-plus-CL-extension-files approach.  Clearly that is what we are doing now in PSL,
building on relatively successful parts of Standard LISP, such as the compiler,
etc. (SL worked well enough for us; we just didn't have the resources to do more
then).  I agree that JonL's comments as relayed by you sound much more
anarchistic...

  I would really like to get involved in Common LISP, probably do the VAX and
68000, since I guess you seem to be snapping up the DEC-20 market. I currently
plan to continue with PSL on the 20, VAX and 68000, since we are almost done with
the first round. The VAX is 90% complete and the 68000 is partially underway. In the
same sense that SYSLISP could be the basis for your 20 InterLISP, I think SYSLISP
and some of PSL could be a transportable kernel for CL.

I need of course to find more funding; I can't cover it out of my NSF effort,
since we are just about ready to start using PSL. I'll be teaching a class
using PSL on the DEC-20 and VAX (maybe even the 68000?) this quarter, to get some
Algebra and Graphics projects underway. I will of course strive to be as
CL-compatible as I can afford at this time.
-------
-------

∂23-Dec-81 1306	Guy.Steele at CMU-10A 	Re: various: arithmetic; commonlisp broadcasts 
Date: 23 December 1981 0025-EST (Wednesday)
From: Guy.Steele at CMU-10A
To: Kim.fateman at UCB-C70
Subject:  Re: various: arithmetic; commonlisp broadcasts
CC: gjc at MIT-MC, griss at utah-20, Kim.jkf at UCB-C70, jonl at MIT-MC,
    masinter at PARC-MAXC, rpg at SU-AI
In-Reply-To:  Kim.fateman@Berkeley's message of 22 Dec 81 11:06-EST
Message-Id: <23Dec81 002535 GS70@CMU-10A>

I sent the mail to the specified representatives of Berkeley and Utah
not because they were "token" but because they were the ones that had
actually contributed substantially to the discussion of outstanding issues.
I assumed that they would pass on the news.  I'll be glad to add you to
the mailing list if you really want that much more junk mail.

It should be noted that the InterLISP representative's report is just that:
the report of the InterLISP representative.  I think it is an excellent
report, but do not necessarily agree with all of its value judgements
and perspectives.  Therefore the motivations induced by vanMelle and
suggested in his report are not necessarily the true ones of the other
people involved.  I assume, however, that they accurately reflect vanMelle's
*perception* of people's motives, and as such are a valuable contribution
(because after all people may not understand their own motives well, or
may not realize how well or poorly they are communicating their ideas!).

You ask why Common LISP should support ratios and complex numbers, given
that MacLISP did not and yet MACSYMA got built anyway.  In response,
I rhetorically ask why MacLISP should have supported bignums, since
the PDP-10 does not?  Ratios were introduced primarily because they are
useful, they are natural for novices to use (at least as natural as
binary floating-point, with all its odd quirks, and with the advantage
of calculating exact results, such as (* 3 1/3) => 1, *always*), and
they solve problems with the quotient function.  Complex numbers were
motivated primarily by the S-1, which can handle complex floating-point
numbers and "Gaussian fixnums" primitively.  They need not be in Common
LISP, I suppose, but they are not much work to add.

The results of elementary functions are not being invented in a vacuum,
as you have several times insinuated, nor are the Common LISP implementors
going off and inventing some arbitrary new thing.  I have researched
the implementation, definition, and use of complex numbers in Algol 68,
PL/I, APL, and FORTRAN, and the real elementary functions in another
half-dozen languages.  The definitions of branch cuts and boundary cases,
which are in general not agreed on by any mathematicians at all (they tend
to define them *ad hoc* for the purpose at hand), are taken from a paper
by Paul Penfield for the APL community, in which he considers the problem
at length, weighs alternatives, and justifies his results according to
ten general principles, among which are consistency, keeping branch cuts
away from the positive real axis, preserving identities at boundaries,
and so on.  This paper has appeared in the APL '81 conference.  I agree that
mistakes have been made in other programming languages, but that does not
mean we should hide our heads in the sand.  A serious effort is being made
to learn from the past.  I think this effort is more substantial than will
be made by the dozens of Common LISP users who will have to write their
own trig functions if the language does not provide them.

Even if a mistake is made, it can be compensated for.  MACSYMA presently
has to compensate for MacLISP's ATAN function, whose range is 0 to 2*pi
(for most purposes -pi to pi is more appropriate, and certainly more
conventional).
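
The compensation itself is tiny -- a sketch (the name CONVENTIONAL-ATAN is made
up), assuming a two-argument ATAN whose range is 0 to 2*pi as in MacLISP:

  (defun conventional-atan (y x)           ; fold 0..2pi down onto -pi..pi
    (let ((a (atan y x)))
      (if (> a pi) (- a (* 2 pi)) a)))

but every program that wants the conventional range has to carry it.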

[Could I inquire as to whether (FIX 1.0E20) still produces a smallish
negative number in Franz LISP?]

I could not agree more that all of this is relevant, not to *algebraic*
manipulation, but to *arithmetic* manipulation (although certainly the
presence of rational arithmetic will relieve MACSYMA of that particular
small burden).  But there is no good reason why LISP cannot become a
useful computational as well as symbolic language.  In particular,
certain kinds of AI work such as vision and speech research require
great amounts of numerical computation.  I know that you advocate
methods for linking FORTRAN or C programs to LISP for this purpose.
That is well and good, but I (for one) would like it also to be
practical to do it all in LISP if one so chooses.  LISP has already
expanded its horizons to support text editors and disk controllers;
why not also number-crunching?

--Guy

∂18-Dec-81  1533	Jon L. White <JONL at MIT-XX> 	Extended-addressing Common Lisp   
Date: 18 Dec 1981 1835-EST
From: Jon L. White <JONL at MIT-XX>
Subject: Extended-addressing Common Lisp
To: Hedrick at RUTGERS
cc: rpg at SU-AI

Sounds like a win for you to do it.  As far as I know, no one else
is going to do it (at least not now).  Probably some hints from
the NIL design would be good for you -- at one time the
file MC:NIL;VMACH >  gave a bunch of details about the
NIL "virtual machine".  Probably you should get in personal
touch with me (phone or otherwise) to chat about such "kernels".
-------

∂21-Dec-81  0717	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
Date: 21 Dec 1981 1012-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Common Lisp   
To: RPG at SU-AI
In-Reply-To: Your message of 20-Dec-81 2304-EST

Thanks.  Are you sure Utah is producing Common Lisp?  They have a thing
they call Standard Lisp, which is something completely different.  I have
never heard of a Common Lisp project there, and I work very closely with
their Lisp development people so I think I would have.
-------

I visited there in the middle of last month for about 3 days and talked over
the technical side of Common Lisp being implemented in their style. Martin told
me that if we only insisted on a small virtual machine with most of the
rest in Lisp code from the Common Lisp people he'd like to do it.

I've been looking at their stuff pretty closely for the much behind schedule
Lisp evaluation thing and I'm pretty impressed with them. We discussed
grafting my S-1 Lisp compiler front end on top of their portable compiler.
			-rpg-
∂22-Dec-81  0827	Griss at UTAH-20 (Martin.Griss) 	Re: various: arithmetic;  commonlisp broadcasts
Date: 22 Dec 1981 0924-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Re: various: arithmetic;  commonlisp broadcasts
To: Kim.fateman at UCB-C70, guy.steele at CMU-10A
cc: gjc at MIT-MC, Kim.jkf at UCB-C70, jonl at MIT-MC, masinter at PARC-MAXC,
    rpg at SU-AI, Griss at UTAH-20
In-Reply-To: Your message of 22-Dec-81 0905-MST

I agree with Dick re being on the commonlisp mailing list. The PSL effort
is a more modest attempt at defining a transportable modern LISP, extending
Standard LISP with more powerful and efficient functions. I find no trace
of DM@utah-20 on our system, and have tried various aliases, still with
no luck.

Martin
-------

∂04-Jan-82  1754	Kim.fateman at Berkeley 	numbers in common lisp   
Date: 4 Jan 1982 17:54:03-PST
From: Kim.fateman at Berkeley
To: fahlman@cmu-10a, guy.steele@cmu-10a, moon@mit-ai, rpg@su-ai
Subject: numbers in common lisp
Cc: Kim.jkf@Berkeley, Kim.sklower@Berkeley


*** Issue 81: Complex numbers. Allow SQRT and LOG to produce results in
whatever form is necessary to deliver the mathematically defined result.

RJF:  This is problematical. The mathematically defined result is not
necessarily agreed upon.  Does Log(0) produce an error or a symbol?
(e.g. |log-of-zero| ?)  If a symbol, what happens when you try to
do arithmetic on it? Does sin(x) give up after some specified max x,
or continue to be a periodic function up to the limit of the machine range,
as on the HP 34?  Is accuracy specified in addition to precision?
Is it possible to specify rounding modes by flag setting or by
calling specific rounding-versions e.g. (plus-round-up x y) ? Such
features make it possible to implement interval arithmetic nicely.
Can one trap (signal, throw) on underflow, overflow,...
It would be a satisfying situation if common lisp, or at least a
superset of it, could exploit the IEEE standard. (Prof. Kahan would
much rather that language standardizers NOT delve too deeply into this,
leaving the semantics  (or "arithmetics") to specialists.)

Is it the case that a complex number could be implemented by
#C(x y) == (complex x y) ?  in which case  (real z) ==(cadr z),
(etc); Is a complex "atomic" in the lisp sense, or is it
the case that (eq (numerator #C(x y)) (numerator #C(x z)))?
Can one "rplac←numerator"?
If one is required to implement another type of atom for the
sake of rationals and another for complexes,
and another for ratios of complexes, then the
utility of this had better be substantial, and the implementation
cost modest.  In the case of x and y rational, there are a variety of
ways of representing x + i*y.  For example, it
is always possible to rationalize the denominator, but is it
required?
If  #R(1 2)  == (rat 1 2), is it the case that
(numerator r) == (cadr r)?  What is the numerator of (1/2+i)?
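
To make the representation question concrete, one list reading would be
something like the following sketch (MAKE-CPX, CPX-REAL, and CPX-IMAG are
hypothetical names, used only to illustrate the questions above):

  (defun make-cpx (x y) (list 'complex x y))   ; #C(x y) as the list (COMPLEX x y)
  (defun cpx-real (z) (cadr z))                ; so (real z) == (cadr z)
  (defun cpx-imag (z) (caddr z))

Under that reading a complex is not atomic: two calls to MAKE-CPX with equal
parts give lists that are EQUAL but not EQ, and RPLACA/RPLACD can destructively
alter a "number" -- which is exactly why the atomicity and EQ questions matter.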

Even if you insist that all complex numbers are floats, not rationals,
you have multiple precisions to deal with.  Is it allowed to 
compute intermediate results to higher precision, or must one truncate
(or round) to some target precision in-between operations?

.......
Thus (SQRT -1.0) -> #C(0.0 1.0) and (LOG -1.0) -> #C(0.0 3.14159265).
Document all this carefully so that the user who doesn't care about
complex numbers isn't bothered too much.  As a rule, if you only play
with integers you won't see floating-point numbers, and if you only
play with non-complex numbers you won't see complex numbers.
.......
RJF: You've given 2 examples where, presumably, integers
are converted not only into floats, but into complex numbers. Your
rule does not seem to be a useful characterization. 
Note also that, for example, asin(1.5) is complex.

*** Issue 82: Branch cuts and boundary cases in mathematical
functions. Tentatively consider compatibility with APL on the subject of
branch cuts and boundary cases.
.......
RJF: Certainly gratuitous differences with APL, Fortran, PL/I, etc. are
not a good idea!
.....

*** Issue 83: Fuzzy numerical comparisons. Have a new function FUZZY=
which takes three arguments: two numbers and a fuzz (relative
tolerance), which defaults in a way that depends on the precision of the
first two arguments.

.......
RJF: Why is this considered a language issue (in Lisp!), when the primary
language for numerical work (Fortran, not APL) does not consider it one?  The
computation of absolute and relative errors is sufficiently simple that not much
would be added by making this part of the language.  I believe the fuzz business is used to cover
up the fact that some languages do not support integers. In such systems,
some computations  result in 1.99999 vs. 2.00000 comparisons, even though
both numbers are "integers". 
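
The point about simplicity is easy to make concrete; the whole facility is a
couple of lines of user code (a sketch only -- the default fuzz here is
arbitrary, whereas the proposal would key it to the precision of the arguments):

  (defun fuzzy= (x y &optional (fuzz 1.0e-6))    ; FUZZ is a relative tolerance
    (<= (abs (- x y))
        (* fuzz (max (abs x) (abs y)))))

so nothing much is gained by freezing one choice of defaulting rule into the
language itself.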

Incidentally, on "mod" of floats, I think that what you want is
like the "integer-part" of the IEEE proposal.  The EMOD instruction on 
the VAX is a brain-damaged attempt to do range-reductions.
.......

*** Issue 93: Complete set of trigonometric functions? Add ASIN, ACOS,
and TAN.


*** Issue 95: Hyperbolic functions. Add SINH, COSH, TANH, ASINH, ACOSH,
and ATANH.
.....
also useful are log(1+x) and exp(x)-1.


*** Issue 96: Are several versions of pi necessary? Eliminate the
variables SHORT-PI, SINGLE-PI, DOUBLE-PI, and LONG-PI, retaining only
PI.  Encourage the user to write such things as (SHORT-FLOAT PI),
(SINGLE-FLOAT (/ PI 2)), etc., when appropriate.
......
RJF: huh?  why not #(times 4 (atan 1.0)),  #(times 4 (atan 1.0d0)) etc.
It seems you are placing a burden on the implementors and discussants
of common lisp to write such trivial programs when the same thing
could be accomplished by a comment in the manual.

.......
.......
RJF: Sorry if the above comments sound overly argumentative.  I realize they
are in general not particularly constructive. 
I believe the group here at UCB will be making headway in many 
of the directions required as part of the IEEE support.

∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
Date: 15 January 1982 1124-EST (Friday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Multiple Values
CC: Scott.Fahlman at CMU-10A
Message-Id: <15Jan82 112415 SF50@CMU-10A>


I hate to rock the boat, but I would like to re-open one of the issues
supposedly settled at the November meeting, namely issue 55: whether to
go with the simple Lisp Machine style multiple-value receiving forms, or
to go with the more complex forms in the Swiss Cheese Edition, which
provide full lambda-list syntax.

My suggestion was that we go with the simple forms and also provide the
Multiple-Value-Call construct, which more or less subsumes the
interesting uses for the Lambda-list forms.  The latter is quite easy
to implement, at least in Spice Lisp and I believe also in Lisp Machine
Lisp: you open the specified function call frame, evaluate the
arguments (which may return multiples) leaving all returned values on
the stack, then activate the call.  The normal argument-passing
machinery  (which is highly optimized) does all the lambda grovelling.
Furthermore, since this is only a very slight variation on a normal
function call, we should not be screwed in the future by unanticipated
interactions between this and, say, the declaration mechanism.
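
For instance, receiving the two values of a quotient-and-remainder function
with full lambda-list syntax needs nothing beyond this construct and an
explicit lambda (a sketch, assuming FLOOR returns its quotient and remainder
as two values, and writing MULTIPLE-VALUE-CALL out in full):

  (multiple-value-call #'(lambda (q &optional (r 0)) (list q r))
                       (floor 7 2))
  ;; => (3 1)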

Much to my surprise, the group's decision was to go with all of the
above, but also to require that the lambda-hacking forms be supported.
This gives me real problems.  Given the M-V-CALL construct, I think
that these others are quite useless and likely to lead to many bad
interactions: this is now the only place where general lambda-lists have
to be grovelled outside of true function calls and defmacro.  I am not
willing to implement yet another variation on lambda-grovelling
just to include these silly forms, unless someone can show me that they
are more useful than I think they are.

The November vote might reflect the notion that M-V-LET and M-V-SETQ
can be implemented merely as special cases of M-V-CALL.  Note however,
that the bodies of the M-V-LET and M-V-SETQ forms are defined as
PROGNs, and will see a different set of local variables than they would
see if turned into a function to be called.  At least, that will be the
case unless Guy can come up with some way of hacking lexical closures
so as to make embedded lambdas see the lexical binding environment in
which they are defined.  Right now, for me at least, it is unclear
whether this can be done for all Common Lisp implementations with low
enough cost that we can make it a required feature.  In the meantime, I
think it is a real mistake to include in the language any constructs
that require a successful solution to this problem if they are to be
implemented decently.

So my vote (with the maximum number of exclamation points) continues to
be that Common Lisp should include only the Lisp Machine style forms,
plus M-V-CALL of multiple arguments.  Are the other forms really so
important to the rest of you?

All in all, I think that the amount of convergence made in the November
meeting was really remarkable, and that we are surprisingly close to
winning big on this effort.

-- Scott

∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
Date: 15 January 1982 12:14-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: multiple values.
To: Scott.Fahlman at CMU-10A
cc: Common-lisp at SU-AI

[1] I think your last note has some incorrect assumptions about how
    the procedure call mechanism will work on future Lisp machines.
    Not that the assumption isn't reasonable, but as I recall the procedure
    ARGUMENT mechanism and the mechanism for passing the back
    the FIRST VALUE was designed to be inconsistent with the mechanism
    for passing the rest of the values. This puts a whole different
    perspective on the language semantics.
[2] At least one implementation, NIL, guessed that there would be
    demand in the future for various lambda extensions, so a
    sufficiently general lambda-grovelling mechanism was painlessly
    introduced from the beginning.

∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
Date: Saturday, 16 January 1982, 02:36-EST
From: David A. Moon <Moon at MIT-MC>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
Cc: common-lisp at su-ai

We are planning for implementation of the new multiple-value receiving
forms with &optional and &rest, on the L machine, but are unlikely to
be able to implement them on the present Lisp machine without a significant
amount of work.  I would just as soon see them flushed, but am willing
to implement them if the consensus is to keep them.

If by lambda-grovelling you mean (as GJC seems to think you mean) a
subroutine in the compiler that parses out the &optionals, that is about
0.5% of the work involved.  If by lambda-grovelling you mean the generated
code in a compiled function that takes some values and defaults the
unsupplied optionals, indeed that is where the hair comes in, since in
most implementations it can't be -quite- the same as the normal function-entry
case of what might seem to be the same thing.

∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
Date: 16 January 1982 0930-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: David A. Moon <Moon at MIT-MC> 
Subject:  Re: Multiple Values
CC: common-lisp at su-ai
In-Reply-To:  David A. Moon's message of 16 Jan 82 02:36-EST
Message-Id: <16Jan82 093009 SF50@CMU-10A>


As Moon surmises, my concern for "Lambda-grovelling" was indeed about
needing a second, slightly different version of the whole binding and
defaulting and rest-ifying machinery, not about the actual parsing of
the Lambda-list syntax which, as GJC points out, can be mostly put into
a universal function of its own.
-- Scott

∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
Date: Saturday, 16 January 1982, 10:22-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A, common-lisp at su-ai

What Moon says is true: I am writing a compiler, and parsing the
&-mumbles is quite easy compared to generating the code that implements
taking the returned values off of the stack and putting them where they
go while managing to run the default-forms and so on.  I could live
without the &-mumble forms of the receivers, although they seem like
they may be a good idea, and we are willing to implement them if they
appear in the Common Lisp definition.  I would not say that it is
generally an easy feature to implement.

It should be kept in mind that multiple-value-call certainly does not
provide the functionality of the &-mumble forms.  Only rarely do you
want to take all of the values produced by a function and pass them all
as successive arguments to a function.  Often they are some values
computed by the same piece of code, and you want to do completely
different things with each of them.
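
A typical case (a sketch, using MULTIPLE-VALUE-BIND as the name of the simple
receiving form and assuming FLOOR returns its quotient and remainder as two
values):

  (multiple-value-bind (quotient remainder) (floor 17 5)
    (if (zerop remainder)
        (list 'exact quotient)
        (list 'inexact quotient remainder)))
  ;; => (INEXACT 3 2)

Here the quotient and the remainder go down entirely different paths, so
passing them together as successive arguments to some one function would not
help.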

The goal of the &-mumble forms was to provide the same kind of
error-checking that we have with function calling.  Interlisp has no
such error-checking on function calls, which seems like a terrible thing
to me; the argument says that the same holds true of returned values.
I'm not convinced by that argument, but it has some merit.

∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
Date: 16 January 1982 17:11-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
cc: common-lisp at SU-AI

I mostly agree with SEF.

Better than a separate function M-V-CALL would be a new option to the
function CALL that allows one or more of several arg-forms to be
treated a la M-V-CALL.  Then it is possible to have more than one arg
form, all of whose values become separate args, intermixed with lists
of evaluated args, and ordinary args; but it is not really any harder
to implement than M-V-CALL alone.

[Background note: the Lisp machine function CALL takes alternating
options and arg-forms.  Each option says how to pass the following
arg-form.  It is either a symbol or a list of symbols.  Symbols now
allowed are SPREAD and OPTIONAL.  SPREAD means pass the elements of
the value as args.  OPTIONAL means do not get an error if the function
being called doesn't want the args.  This proposal is to add VALUES as
an alternative to SPREAD, meaning pass all values of the arg form as
args.]

If the &-keyword multiple value forms are not going to be implemented
on the current Lisp machine, that is an additional reason to keep them
out of Common Lisp, given that they are not vitally necessary for
anything.

∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
Date: 16 January 1982 2333-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Keyword sequence fns
Message-Id: <16Jan82 233312 SF50@CMU-10A>


My proposal for keyword-style sequence functions can be found on CMUA as

TEMP:NEWSEQ.PRE[C380SF50]

or as

TEMP:NEWSEQ.DOC[C380SF50]

Fire away.
-- Scott

∂17-Jan-82  1756	Guy.Steele at CMU-10A 	Sequence functions    
Date: 17 January 1982 2056-EST (Sunday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Sequence functions
Message-Id: <17Jan82 205656 GS70@CMU-10A>

Here is an idea I would like to bounce off people.

The optional arguments given to the sequence functions are of two general
kinds: (1) specify subranges of the sequences to operate on; (2) specify
comparison predicates.  These choices tend to be completely orthogonal
in that it would appear equally likely to want to specify (1) without (2)
as to want to specify (2) without (1).  Therefore it is probably not
acceptable to choose a fixed order for them as simple optional arguments.

It is this problem that led me to propose the "functional-style" sequence
functions.  The minor claimed advantage was that the generated functions
might be useful as arguments to other functionals, particularly MAP.  The
primary motivation, however, was that this would syntactically allow
two distinct places for optional arguments, as:
   ((FREMOVE ...predicate optionals...) sequence ...subrange optionals...)

Here I propose to solve this problem in a different way, which is simply
to remove the subrange optionals entirely.  If you want to operate on a
subsequence, you have to use SUBSEQ to specify the subrange.  (Of course,
this won't work for the REPLACE function, which is in-place destructive.)
Given this, consistently reorganize the argument list so that the sequence
comes first.  This would give:
	(MEMBER SEQ #'EQL X)
	(MEMBER SEQ #'NUMBERP)
and so on.
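
Under this scheme a subrange operation is written by composition; for example
(just a sketch of the intended style, using the argument order proposed above):

	(MEMBER (SUBSEQ SEQ 2 5) #'EQL X)

searches only that subsequence of SEQ for something EQL to X, with SUBSEQ
doing all the subrange work.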

Disadvantages:
(1) Unfamiliar argument order.
(2) Using SUBSEQ admittedly is not as efficient as the subrange arguments
("but a good compiler could...").
(3) This doesn't allow you to elide EQL or EQUAL or whatever the chosen
default is.

Any takers?
--Guy




∂17-Jan-82  2207	Earl A. Killian <EAK at MIT-MC> 	Sequence functions    
Date: 17 January 1982 23:01-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Sequence functions
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

Using subseq instead of additional arguments is of course what
other languages do, and it is quite tasteful in those languages
because creating a subsequence doesn't cons.  In Lisp it
does, which makes a lot of difference.  Unless you're willing to
GUARANTEE that the consing will be avoided, I don't think the
proposal is acceptable.  Consider a TECO-style buffer manager
that wanted to use string-replace to copy stuff around; it'd be
terrible if it consed the stuff it wanted to move!

∂18-Jan-82  0235	Richard M. Stallman <RMS at MIT-AI> 	subseq and consing
Date: 18 January 1982 05:25-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: subseq and consing
To: common-lisp at SU-AI

Even if SUBSEQ itself conses, you can offer compiler optimizations
which take expressions where sequence functions are applied to calls
to SUBSEQ and turn them into calls to other internal functions which
take extra args and avoid consing.  That is good enough in efficiency
and provides the same simplicity in the user interface.
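
The optimization in question is just a source-to-source rewrite, for example
(the internal name %MEMBER-SUBRANGE is hypothetical):

	(MEMBER (SUBSEQ SEQ START END) #'EQL X)
	==>  (%MEMBER-SUBRANGE SEQ START END #'EQL X)

where the internal function takes the extra arguments and never conses the
subsequence.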

While on the subject, how about eliminating all the functions
to set this or that from the language description
(except a few for Maclisp compatibility) and making SETF
the only way to set anything?
The only use for the setting-functions themselves, as opposed
to SETF, is to pass to a functional--they are more efficient perhaps
than a user-written function that just uses SETF.  However, such
user-written functions that only use SETF can be made to expand
into the internal functions which exist to do the dirty work.
This change would greatly simplify the language.
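
Concretely, the functional case looks like this (a sketch; the claim is only
that a compiler could expand the SETF-only lambda into the same code the named
setting function would have produced):

  (let ((cells (list (list 0) (list 0) (list 0))))
    ;; set the CAR of each cell through SETF instead of passing #'RPLACA:
    (mapc #'(lambda (cell val) (setf (car cell) val)) cells '(a b c))
    cells)
  ;; => ((A) (B) (C))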

∂18-Jan-82  0822	Don Morrison <Morrison at UTAH-20> 	Re: subseq and consing  
Date: 18 Jan 1982 0918-MST
From: Don Morrison <Morrison at UTAH-20>
Subject: Re: subseq and consing
To: RMS at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 18-Jan-82 0325-MST

And, after you've eliminated all the setting functions/forms, including
SETQ, change the name from SETF to SETQ.
-------

∂02-Jan-82  0908	Griss at UTAH-20 (Martin.Griss) 	Com L  
Date:  2 Jan 1982 1005-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Com L
To: guy.steele at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

I have retrieved the revisions and decisions, and will look them over.
I will try to set up arrangements to be at POPL Monday-Wednesday,
depending on flights.

What is the Common LISP schedule, next meeting, etc.? Will we be invited to
attend, or is this one of the topics for us to discuss at POPL?
What in fact are we to discuss, and what should I be thinking about?
As I explained, I hope to finish this round of PSL implementation
on DEC-20, VAX and maybe even first version on 68000 by then.
We will then fill in some missing features, and start bringing up REDUCE,
the meta-compiler, BIGfloats, and PictureRLISP graphics. At that point I will
have accomplished a significant amount of my NSF goals this year.

The next step is to significantly improve PSL and SYSLISP, and merge with the
Mode Analysis phase for improved LISP<->SYSLISP communications and efficiency.

At the same time, we will be looking over various LISP systems to see what sort of good
features can be adapted, and what sort of compatibility packages are needed (e.g., a
UCI-LISP package, a FranzLISP package, etc.).

It's certainly in this phase that I could easily attempt to modify PSL to
provide a CommonLISP kernel, assuming that we have not already adapted much of the
code.
M
-------

∂14-Jan-82  0732	Griss at UTAH-20 (Martin.Griss) 	Common LISP 
Date: 14 Jan 1982 0829-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Common LISP
To: guy.steele at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

I just received a message from Hedrick, regarding his project of doing an
extended-addressing Common LISP on the DEC-20; it also refers to
CMU doing the VAX version. I thought one of the possibilities we
were to discuss was whether we might become involved in doing the
VAX version? Is this true - i.e., what do you see as the possible routes
of joint work?
Martin
-------

∂14-Jan-82  2032	Jonathan A. Rees <JAR at MIT-MC>   
Date: 14 January 1982 23:32-EST
From: Jonathan A. Rees <JAR at MIT-MC>
To: GLS at MIT-MC
cc: BROOKS at MIT-MC, RPG at MIT-MC

We've integrated your changes to the packing phase into our
code... we'll see pretty soon whether the new preferencing stuff works.
I've written a fancy new closure analysis phase which you might be
interested in snarfing at some point.  Much smarter than RABBIT about
SETQ'ed closed-over variables.
Using NODE-DISPATCH now.  Win.
I now have an ALIASP slot in the NODE structure, and the ALIAS-IF-SAFE
analysis has been moved into TARGETIZE-CALL-PRIMOP.  I'm debugging
that now.  This means the DEPENDENTS slot goes away.  I'm trying to
get e.g. (RPLACA X (FOO)) where X must be in a register (because
it's an RPLACA) and (FOO) is a call to an unknown function (and thus
clobbers all regs) to work fairly efficiently in all cases.
In fact I've rewritten a lot of TARGETIZE...

Does the <S1LISP.COMPILER> directory still exist?  I can't seem to read
it from FTP.  Has anyone done more work on S1COMP?

The T project, of course, is behind schedule.  As I told you before,
a toy interpreter runs on the Vax, but so far nothing besides
a read-factorial-print loop runs on the 68000.  But soon, I hope,...

∂15-Jan-82  0109	RPG   	Rutgers lisp development project 
 ∂14-Jan-82  1625	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Rutgers lisp development project    
Mail-from: ARPANET site RUTGERS rcvd at 13-Jan-82 2146-PST
Date: 14 Jan 1982 0044-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Rutgers lisp development project
To: bboard at RUTGERS, griss at UTAH-20, admin.mrc at SU-SCORE, jsol at RUTGERS
Remailed-date: 14 Jan 1982 1622-PST
Remailed-from: Mark Crispin
Remailed-to: Feigenbaum at SUMEX-AIM, REG at SU-AI

It now appears that we are going to do an implementation of Common Lisp
for the DEC-20.  This project is being funded by DEC.

		Why are we doing this project at all?

This project is being done because a number of our researchers are going
to want to be able to move their programs to other systems than the
DEC-20.  We are proposing to get personal machines over the next few
years.  Sri has already run into a problem in trying to give AIMDS to
someone that only has a VAX.  Thus we think our users are going to want
to move to a dialect that is widely portable.

Also, newer dialects have some useful new features.  Although these
features can be put into Elisp, doing so will introduce
incompatibilities with old programs.  R/UCI Lisp already has too many
inconsistencies introduced by its long history.  It is probably better
to start with a dialect that has been designed in a coherent fashion.

			Why Common Lisp?

There are only three dialects of Lisp that are in wide use within the
U.S. on a variety of systems:  Interlisp, meta-Maclisp, and Standard
Lisp.  (By meta-Maclisp I mean a family of dialects that are all
related to Maclisp and generally share ideas.)  Of these, Standard Lisp
has a reputation of not being as "rich" a language, and in fact is not
taken seriously by most sites.  This is not entirely fair, but there is
probably nothing we can do about that fact at this stage. So we are left
with Interlisp and meta-Maclisp.  A number of implementors from the
Maclisp family have gotten together to define a common dialect that
combines the best features of their various dialects, while still being
reasonable in size.  A manual is being produced for it, and once
finished will remain reasonably stable.  (Can you believe it?
Documentation before coding!)  This dialect is now called Common Lisp.
The advantages of Common Lisp over Interlisp are:

  - outside of BBN and Xerox, the Lisp development efforts now going on
	all seem to be in the Maclisp family, and now are being
	redirected towards Common Lisp.  These efforts include 
	CMU, the Lisp Machine companies (Symbolics, LMI), LRL and MIT.

  - Interlisp has some features, particularly the spaghetti stack,
	that make it impossible to implement as efficiently and cleanly
	as Common Lisp.  (Note that it is possible to get as good
	effiency out of compiled code if you do not use these features,
	and if you use special techniques when compiling.  However that
	doesn't help the interpreter, and is not as clean.)

  - Because of these complexities in Interlisp, implementation is a
	large and complex job.  ARPA funded a fairly large effort at
	ISI, and even that seems to be marginal.  This comment is based
	on the report on the ISI project produced by Larry Masinter,
	<lisp>interlisp-vax-rpt.txt.  Our only hope would be to take
	the ISI implementation and attempt to transport it to the 20.
	I am concerned that the result of this would be extremely slow.
	I am also concerned that we might turn out not to have the
	resources necessary to do a good job of it.

  - There seems to be a general feeling that Common Lisp will have a
	number of attractive features as a language.  (Notice that I am
	not talking about user facilities, which will no doubt take some
	time before they reach the level of Interlisp.)  Even people
	within Arpa are starting to talk about it as the language of the
	future.  I am not personally convinced that it is seriously
	superior to Interlisp, but it is as good (again, at the language
	level), and the general Maclisp community seems to have a number
	of ideas that are significantly in advance of what is likely to
	show up in Interlisp with the current support available for it.

There are two serious disadvantages of Common Lisp:

  - It does not exist yet.  As of this week, there now seem to be
	sufficient resources committed to it that we can be sure it will
	be implemented.  The following projects are now committed, at a
	level sufficient for success:  VAX (CMU), DEC-20 (Rutgers), PERQ
	and other related machines (CMU), Lisp Machine (Symbolics), S-1
	(LRL).  I believe this is sufficient to give the language a
	"critical mass".

  - It does not have user facilities defined for it.  CMU is heavily
	committed to the Spice (PERQ) implementation, and will produce
	the appropriate tools.  They appear to be funded sufficiently
	that this will happen.

		 Why is DEC funding it, and what will be
		 	our relationship with them?

LCG (the group within DEC that is responsible for the DEC-20) is
interested in increasing the software that will support the full 30-bit
address space possible in the DEC-20 architecture.  (Our current
processor will only use 23 bits of this, but this is still much better
than what was supported by the old software, which is 18 bits.)  They
are proceeding at a reasonable rate with the software that is supported
by DEC.  However they recognize that many important languages were
developed outside of DEC, and that it will not be practical for them
to develop large-address-space implementations of all of them in-house.
Thus DEC is attempting to find places that are working on the more
important of these languages, and they are funding efforts to develop
large address versions.  They are sponsoring us for Lisp, and Utah
for C.  Pascal is being done in a slightly complex fashion.  (In fact
some of our support from DEC is for Pascal.)

DEC does not expect to make money directly from these projects.  We will
maintain control over the software we develop, and could sell support
for it if we wanted to. We are, of course, expected to make the software
widely available. (Most likely we will submit it to DECUS but also
distribute it ourselves.)  What DEC gets out of it is that the large
address space DEC-20 will have a larger variety of software available
for it than otherwise.  I believe this will be an important point for
them in the long run, since no one is going to want to buy a machine for
which only the Fortran compiler can generate programs larger than 256K.
Thus they are facing the following facts:
  - they can't do things in house nearly as cheaply as universities
	can do them.
  - universities are no longer being as well funded to do language
	development, particularly not for the DEC-20.

			How will we go about it?

We have sufficient funding for one full-time person and one RA.  Both
DEC and Rutgers are very slow about paperwork.  But these people should
be in place sometime early this semester.  The implementation will
involve a small kernel, in assembly language, with the rest done in
Lisp.  We will get the Lisp code from CMU, and so will only have to do
the kernel.  This project seems to be the same size as the Elisp
project, which was done within a year using my spare time and a month or
so of Josh's time.  It seems clear that we have sufficient manpower. (If
you think maybe we have too much, I can only say that if we finish the
kernel sooner than planned, we will spend the time working on user
facilities, documentation, and helping users here convert to it.) CMU
plans to finish the VAX project in a year, with a preliminary version in
6 months and a polished release in a year.  Our target is similar.
-------

∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
Date: 15 January 1982 1124-EST (Friday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Multiple Values
CC: Scott.Fahlman at CMU-10A
Message-Id: <15Jan82 112415 SF50@CMU-10A>


I hate to rock the boat, but I would like to re-open one of the issues
supposedly settled at the November meeting, namely issue 55: whether to
go with the simple Lisp Machine style multiple-value receving forms, or
to go with the more complex forms in the Swiss Cheese Edition, which
provide full lambda-list syntax.

My suggestion was that we go with the simple forms and also provide the
Multiple-Value-Call construct, which more or less subsumes the
interesting uses for the Lambda-list forms.  The latter is quite easy
to implement, at least in Spice Lisp and I believe also in Lisp Machine
Lisp: you open the specified function call frame, evaluate the
arguments (which may return multiples) leaving all returned values on
the stack, then activate the call.  The normal argument-passing
machinery  (which is highly optimized) does all the lambda grovelling.
Furthermore, since this is only a very slight variation on a normal
function call, we should not be screwed in the future by unanticipated
interactions between this and, say, the declaration mechanism.

Much to my surprise, the group's decision was to go with all of the
above, but also to require that the lambda-hacking forms be supported.
This gives me real problems.  Given the M-V-CALL construct, I think
that these others are quite useless and likely to lead to many bad
interactions: this is now the only place where general lambda-lists have
to be grovelled outside of true function calls and defmacro.  I am not
willing to implement yet another variation on lambda-grovelling
just to include these silly forms, unless someone can show me that they
are more useful than I think they are.

The November vote might reflect the notion that M-V-LET and M-V-SETQ
can be implemented merely as special cases of M-V-CALL.  Note, however,
that the bodies of the M-V-LET and M-V-SETQ forms are defined as
PROGNs, and will see a different set of local variables than they would
see if turned into a function to be called.  At least, that will be the
case unless Guy can come up with some way of hacking lexical closures
so as to make embedded lambdas see the lexical binding environment in
which they are defined.  Right now, for me at least, it is unclear
whether this can be done for all Common Lisp implementations with low
enough cost that we can make it a required feature.  In the meantime, I
think it is a real mistake to include in the language any constructs
that require a successful solution to this problem if they are to be
implemented decently.
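
To make the scoping problem concrete, consider this sketch (the M-V-LET
syntax here is hypothetical, used only to illustrate the point):

	(let ((x 10))
	  (multiple-value-let (q r) (floor x 3)
	    (+ q r x)))                        ; body is a PROGN, so X is visible

	(let ((x 10))
	  (multiple-value-call #'(lambda (q r) (+ q r x))
	                       (floor x 3)))   ; X is now free in the LAMBDA

Without true lexical closures, the X inside the LAMBDA in the second form
need not refer to the LET's X, which is exactly the difficulty described
above.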

So my vote (with the maximum number of exclamation points) continues to
be that Common Lisp should include only the Lisp Machine style forms,
plus M-V-CALL of multiple arguments.  Are the other forms really so
important to the rest of you?

All in all, I think that the amount of convergence made in the November
meeting was really remarkable, and that we are surprisingly close to
winning big on this effort.

-- Scott

∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
Date: 15 January 1982 12:14-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: multiple values.
To: Scott.Fahlman at CMU-10A
cc: Common-lisp at SU-AI

[1] I think your last note has some incorrect assumptions about how
    the procedure call mechanism will work on future Lisp machines.
    Not that the assumption isn't reasonable, but as I recall the procedure
    ARGUMENT mechanism and the mechanism for passing back
    the FIRST VALUE were designed to be inconsistent with the mechanism
    for passing the rest of the values. This puts a whole different
    perspective on the language semantics.
[2] At least one implementation, NIL, guessed that there would be
    demand in the future for various lambda extensions, so a
    sufficiently general lambda-grovelling mechanism was painlessly
    introduced from the beginning.

∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
Date: Saturday, 16 January 1982, 02:36-EST
From: David A. Moon <Moon at MIT-MC>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
Cc: common-lisp at su-ai

We are planning for implementation of the new multiple-value receiving
forms with &optional and &rest, on the L machine, but are unlikely to
be able to implement them on the present Lisp machine without a significant
amount of work.  I would just as soon see them flushed, but am willing
to implement them if the consensus is to keep them.

If by lambda-grovelling you mean (as GJC seems to think you mean) a
subroutine in the compiler that parses out the &optionals, that is about
0.5% of the work involved.  If by lambda-grovelling you mean the generated
code in a compiled function that takes some values and defaults the
unsupplied optionals, indeed that is where the hair comes in, since in
most implementations it can't be -quite- the same as the normal function-entry
case of what might seem to be the same thing.
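
Concretely, the hairy part is the defaulting code for a receiving form
something like the following (the &optional receiving syntax is the disputed
Swiss Cheese style, shown only for illustration; FOO is an arbitrary
multiple-value-returning function):

	(multiple-value-bind (quo rem &optional (exactp t))
	    (foo x)
	  (list quo rem exactp))
	;; the compiled body must notice how many values FOO actually returned
	;; and evaluate the default for EXACTP when only two arrive; this is
	;; almost, but not quite, the normal function-entry defaulting code.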

∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
Date: 16 January 1982 0930-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: David A. Moon <Moon at MIT-MC> 
Subject:  Re: Multiple Values
CC: common-lisp at su-ai
In-Reply-To:  David A. Moon's message of 16 Jan 82 02:36-EST
Message-Id: <16Jan82 093009 SF50@CMU-10A>


As Moon surmises, my concern for "Lambda-grovelling" was indeed about
needing a second, slightly different version of the whole binding and
defaulting and rest-ifying machinery, not about the actual parsing of
the Lambda-list syntax which, as GJC points out, can be mostly put into
a universal function of its own.
-- Scott

∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
Date: Saturday, 16 January 1982, 10:22-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A, common-lisp at su-ai

What Moon says is true: I am writing a compiler, and parsing the
&-mumbles is quite easy compared to generating the code that implements
taking the returned values off of the stack and putting them where they
go while managing to run the default-forms and so on.  I could live
without the &-mumble forms of the receivers, although they seem like
they may be a good idea, and we are willing to implement them if they
appear in the Common Lisp definition.  I would not say that it is
generally an easy feature to implement.

It should be kept in mind that multiple-value-call certainly does not
provide the functionality of the &-mumble forms.  Only rarely do you
want to take all of the values produced by a function and pass them all
as successive arguments to a function.  Often they are some values
computed by the same piece of code, and you want to do completely
different things with each of them.

The goal of the &-mumble forms was to provide the same kind of
error-checking that we have with function calling.  Interlisp has no
such error-checking on function calls, which seems like a terrible thing
to me; the argument says that the same holds true of returned values.
I'm not convinced by that argument, but it has some merit.

∂16-Jan-82  1252	Griss at UTAH-20 (Martin.Griss) 	Kernel for Common LISP
Date: 16 Jan 1982 1347-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Kernel for Common LISP
To: guy.steel at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

What was actually decided about a "small" common kernel, with the rest
in LISP? Were core functions identified? This is the first place that
my work and expertise will strongly overlap; the smaller the
kernel, and the more jazzy features that can be implemented
in terms of it, the better.

Have you sent out a revised Ballot, or are there pending questions that
the "world-at-large" should respond to (as opposed to the ongoing
group that has been making decisions)? The last bit about the
lambda stuff for multiples is pretty obscure; it seems to depend on
a model that was discussed, but not documented (as far as I can see).

In general, where are the proposed solutions to the hard implementation
issues being recorded?
Martin
-------

∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
Date: 16 January 1982 17:11-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
cc: common-lisp at SU-AI

I mostly agree with SEF.

Better than a separate function M-V-CALL would be a new option to the
function CALL that allows one or more of several arg-forms to be
treated a la M-V-CALL.  Then it is possible to have more than one arg
form, all of whose values become separate args, intermixed with lists
of evaluated args, and ordinary args; but it is not really any harder
to implement than M-V-CALL alone.

[Background note: the Lisp machine function CALL takes alternating
options and arg-forms.  Each option says how to pass the following
arg-form.  It is either a symbol or a list of symbols.  Symbols now
allowed are SPREAD and OPTIONAL.  SPREAD means pass the elements of
the value as args.  OPTIONAL means do not get an error if the function
being called doesn't want the args.  This proposal is to add VALUES as
an alternative to SPREAD, meaning pass all values of the arg form as
args.]
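
An illustrative use of the proposed option (the syntax is only sketched from
the note above; nothing here is settled):

	(call #'list 'spread '(a b) 'values (floor 7 2))
	;; the elements of (A B) and then both values of FLOOR would all be
	;; passed to LIST as separate arguments, giving (A B 3 1)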

If the &-keyword multiple value forms are not going to be implemented
on the current Lisp machine, that is an additional reason to keep them
out of Common Lisp, given that they are not vitally necessary for
anything.

∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
Date: 16 January 1982 2333-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Keyword sequence fns
Message-Id: <16Jan82 233312 SF50@CMU-10A>


My proposal for keyword-style sequence functions can be found on CMUA as

TEMP:NEWSEQ.PRE[C380SF50]

or as

TEMP:NEWSEQ.DOC[C380SF50]

Fire away.
-- Scott

∂17-Jan-82  0618	Griss at UTAH-20 (Martin.Griss) 	Agenda 
Date: 17 Jan 1982 0714-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Agenda
To: guy.Steele at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

Still haven't any indication from you guys as to what we should be discussing;
i.e., what should I be thinking about as our possible mode of interaction with
the common Lispers?
M
-------

I had been deferring to GLS on this by silence, but let me tell you my thoughts
on the current situation.

First, the DEC/Rutgers things took me somewhat by surprise. I know that Hedrick
thinks very highly of the Standard Lisp stuff, and I wouldn't mind seeing
a joint effort from the Common Lisp core people, Dec/Rutgers, and Utah.

From the Utah connection I would like to see a clean looking virtual machine,
a set of Lisp code to implement the fluff from Common Lisp, and a reasonable
portable type of compiler.

By `connection' I mean Utah providing the virtual machine for a few specific
computers, Common Lisp core people providing most of the Lisp code, and
maybe S-1 and Utah providing the compiler.

Even with Dec/Rutgers doing the Vax/20 versions, Utah provides us with
the expertise to do many other important, but bizarre machines, such as
68k based machines, IBM equipment, and Burroughs, to name a few. Perhaps
Rutgers/DEC wouldn't mind working with us all on this.

That is what I would like to discuss for political topics.

For technical topics, the virtual machine specification and the compiler
technology.

			-rpg-
∂17-Jan-82  1751	Feigenbaum at SUMEX-AIM 	more on Interlisp-VAX    
Date: 17 Jan 1982 1744-PST
From: Feigenbaum at SUMEX-AIM
Subject: more on Interlisp-VAX
To:   rindfleisch at SUMEX-AIM, barstow at SUMEX-AIM, bonnet at SUMEX-AIM,
      hart at SRI-KL, csd.hbrown at SU-SCORE
cc:   csd.genesereth at SU-SCORE, buchanan at SUMEX-AIM, lenat at SUMEX-AIM,
      friedland at SUMEX-AIM, pople at SUMEX-AIM, gabriel at SU-AI

Mail-from: ARPANET host USC-ISIB rcvd at 17-Jan-82 1647-PST
Date: 17 Jan 1982 1649-PST
From: Dave Dyer       <DDYER at USC-ISIB>
Subject: Interlisp-VAX report
To: feigenbaum at SUMEX-AIM, lynch at USC-ISIB, balzer at USC-ISIB,
    bengelmore at SRI-KL, nilsson at SRI-AI
cc: rbates at USC-ISIB, saunders at USC-ISIB, voreck at USC-ISIB, mcgreal at USC-ISIB,
    ignatowski at USC-ISIB, hedrick at RUTGERS, admin.mrc at SU-SCORE,
    jsol at RUTGERS, griss at UTAH-20, bboard at RUTGERS, reg at SU-AI

	Addendum to Interlisp-VAX: A report

		Jan 16, 1982


  Since Larry Masinter's "Interlisp-VAX: A Report" is being
used in the battle of LISPs, it is important that it be as
accurate as possible.  This note represents the viewpoint of
the implementors of Interlisp-VAX, as of January 1982.

  The review of the project, and the discussions with other
LISP implementors, that provided the basis for "Interlisp-VAX:
A report", were done in June 1981.  We were given the opportunity
to review and respond to a draft of the report, and had few
objections that were refutable at the time of its writing.

  We now have the advantage of an additional 6 months' development
effort, and can present as facts what would have been merely
counter arguments at the time.


  We believed at the time, and still believe now, that Masinter's
report is largely a fair and accurate presentation of Interlisp-VAX,
and of the long-term efforts necessary to support it.  However,
a few very important points he made have proven to be inaccurate.


AVAILABILITY AND FUNCTIONALITY
------------------------------

  Interlisp-VAX has been in beta test, here at ISI and at several
sites around the network, since November 13 (a Friday - we weren't worried).
We are planning the first general release for February 1982 - ahead
of the schedule that was in effect in June, 1981.

  The current implementation includes all of the features of Interlisp-10
with very minor exceptions.  There is no noticeable gap in functionality
among Interlisp-10, Interlisp-D, and Interlisp-VAX.

   Among the Interlisp systems we are running here are KLONE, AP3,
HEARSAY, and AFFIRM.

PERFORMANCE
-----------

   Masinter's analysis of the problems of maximizing performance,
both for Interlisp generally and for the VAX particularly was excellent.
It is now reasonable to quantify the performance based on experience
with real systems.   I don't want to descend into the quagmire of
benchmarking LISPs here, so I'll limit my statements to the most basic.

  CPU speed (on a VAX/780) is currently in the range of 1/4 the speed
of Interlisp-10 (on a KL-10), which we believe is about half the
asymptotically achievable speed.

   Our rule of thumb for real memory is 1 mb. per active user.


-------

∂17-Jan-82  1756	Guy.Steele at CMU-10A 	Sequence functions    
Date: 17 January 1982 2056-EST (Sunday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Sequence functions
Message-Id: <17Jan82 205656 GS70@CMU-10A>

Here is an idea I would like to bounce off people.

The optional arguments given to the sequence functions are of two general
kinds: (1) specify subranges of the sequences to operate on; (2) specify
comparison predicates.  These choices tend to be completely orthogonal
in that it would appear equally likely to want to specify (1) without (2)
as to want to specify (2) without (1).  Therefore it is probably not
acceptable to choose a fixed order for them as simple optional arguments.

It is this problem that led me to propose the "functional-style" sequence
functions.  The minor claimed advantage was that the generated functions
might be useful as arguments to other functionals, particularly MAP.  The
primary motivation, however, was that this would syntactically allow
two distinct places for optional arguments, as:
   ((FREMOVE ...predicate optionals...) sequence ...subrange optionals...)

Here I propose to solve this problem in a different way, which is simply
to remove the subrange optionals entirely.  If you want to operate on a
subsequence, you have to use SUBSEQ to specify the subrange.  (Of course,
this won't work for the REPLACE function, which is in-place destructive.)
Given this, consistently reorganize the argument list so that the sequence
comes first.  This would give:
	(MEMBER SEQ #'EQL X)
	(MEMBER SEQ #'NUMBERP)
and so on.
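
For a subrange operation, the same idea composed with SUBSEQ would look
something like (illustrative only):
	(MEMBER (SUBSEQ SEQ 10 20) #'EQL X)
rather than passing 10 and 20 to MEMBER as extra subrange arguments.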

Disadvantages:
(1) Unfamiliar argument order.
(2) Using SUBSEQ admittedly is not as efficient as the subrange arguments
("but a good compiler could...").
(3) This doesn't allow you to elide EQL or EQUAL or whatever the chosen
default is.

Any takers?
--Guy




∂17-Jan-82  2042	Earl A. Killian <EAK at MIT-MC> 	Sequence functions    
Date: 17 January 1982 23:01-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Sequence functions
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

Using subseq instead of additional arguments is of course what
other languages do, and it is quite tasteful in those languages
because creating a subsequence doesn't cons.  In Lisp it
does, which makes a lot of difference.  Unless you're willing to
GUARANTEE that the consing will be avoided, I don't think the
proposal is acceptable.  Consider a TECO-style buffer management
scheme that wanted to use string-replace to copy stuff around; it'd be
terrible if it consed the stuff it wanted to move!

∂18-Jan-82  0235	Richard M. Stallman <RMS at MIT-AI> 	subseq and consing
Date: 18 January 1982 05:25-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: subseq and consing
To: common-lisp at SU-AI

Even if SUBSEQ itself conses,
if you offer compiler optimizations which take expressions
where sequence functions are applied to calls to subseq
and turn them into calls to other internal functions which
take extra args and avoid consing, this is good enough
in efficiency and provides the same simplicity in user interface.
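
One way to picture the sort of rewrite meant here is a source-to-source
optimizer along these lines (a sketch only; the names OPTIMIZE-COUNT and
%COUNT-FROM-TO are made up for illustration):

	(defun optimize-count (form)
	  ;; FORM is expected to look like (COUNT item (SUBSEQ seq start end)).
	  (let ((item (cadr form))
	        (seq-form (caddr form)))
	    (if (and (consp seq-form) (eq (car seq-form) 'subseq))
	        ;; rewrite into an internal function that takes the bounds as
	        ;; extra args and never builds the intermediate subsequence
	        `(%count-from-to ,item ,(cadr seq-form)
	                         ,(caddr seq-form) ,(cadddr seq-form))
	        form)))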

While on the subject, how about eliminating all the functions
to set this or that from the language description
(except a few for Maclisp compatibility) and making SETF
the only way to set anything?
The only use for the setting-functions themselves, as opposed
to SETF, is to pass to a functional--they are more efficient perhaps
than a user-written function that just uses SETF.  However, such
user-written functions that only use SETF can be made to expand
into the internal functions which exist to do the dirty work.
This change would greatly simplify the language.
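
For example, under that rule one would simply write:

	(setf (car x) y)               ; rather than (rplaca x y)
	(setf (get sym 'color) 'red)   ; rather than a putprop form
	(setf (aref a i) v)            ; rather than a special array-storing function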

∂18-Jan-82  0822	Don Morrison <Morrison at UTAH-20> 	Re: subseq and consing  
Date: 18 Jan 1982 0918-MST
From: Don Morrison <Morrison at UTAH-20>
Subject: Re: subseq and consing
To: RMS at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 18-Jan-82 0325-MST

And, after you've eliminated all the setting functions/forms, including
SETQ, change the name from SETF to SETQ.
-------

∂18-Jan-82  1602	Daniel L. Weinreb <DLW at MIT-AI> 	subseq and consing  
Date: Monday, 18 January 1982, 18:04-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: subseq and consing
To: common-lisp at SU-AI

I agree that GLS's proposal is nice, that it is only acceptable if the
compiler optimizes it, and that it is very easy to optimize.  It is also
extremely clear to the reader of the program, and it cuts down on the
number of arguments that he has to remember.  This sounds OK to me.

∂18-Jan-82  2203	Scott.Fahlman at CMU-10A 	Re: Sequence functions  
Date: 19 January 1982 0103-EST (Tuesday)
From: Scott.Fahlman at CMU-10A
To: Guy.Steele at CMU-10A
Subject:  Re: Sequence functions
CC: common-lisp at su-ai
In-Reply-To:  <17Jan82 205656 GS70@CMU-10A>
Message-Id: <19Jan82 010338 SF50@CMU-10A>


Guy,

I agree that the index-range and the comparison-choice parameters are
orthogonal.  I like your proposal to use SUBSEQ for the ranges -- it
would appear to be no harder to optimize this in the compiler than to
do the equivalent keyword or optional argument thing, and the added
consing in interpreted code (only!)  should not make much difference.
And the semantics of what is going on with all the start and end
options now becomes crystal clear.  We would need a style suggestion in
the manual urging the programmer to use SUBSEQ for this and not some
random thing he cooks up, since the compiler will only recognize fairly
obvious cases.  Good idea!

I do not like the part of your proposal that relates to reordering the
arguments, on the grounds of gross incompatibility.  Unless we want to
come up with totally new names for all these functions, the change will
make it a real pain to move code and programmers over from Maclisp or
Franz.  Too high a price to pay for epsilon increase in elegance.  I
guess that of the suggestions I've seen so far, I would go with your
subseq idea for ranges and my keywords for specifying the comparison,
throwing out the IF family.

-- Scott

∂19-Jan-82  1551	RPG  	Suggestion    
To:   common-lisp at SU-AI  
I would like to make the following suggestion regarding the
strategy for designing Common Lisp. I'm not sure how to exactly
implement the strategy, but I think it is imperative we do something
like this soon.

We should separate the kernel from the Lisp based portions of the system
and design the kernel first. Lambda-grovelling, multiple values,
and basic data structures seem kernel. Sequence functions and names
can be done later.

The reason that we should do this is so that the many man-years of effort
to implement a Common Lisp can be done in parallel with the design of
less critical things. 
			-rpg-

∂19-Jan-82  2113	Griss at UTAH-20 (Martin.Griss) 	Re: Suggestion        
Date: 19 Jan 1982 1832-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Re: Suggestion    
To: RPG at SU-AI, common-lisp at SU-AI
cc: Griss at UTAH-20
In-Reply-To: Your message of 19-Jan-82 1651-MST

I agree entirely. In terms of my 2 interests:
a) Implementing Common LISP kernel/compatibility in/for PSL
b) Getting our and other LISP tools working for Common LISP

I would very much like to see a clear effort NOW to isolate some of the
kernel features, and major implementation issues (data-types, user
control over storage manager, etc) so that some of us can implement
a kernel, and others can design extensions.
-------

∂20-Jan-82  1604	David A. Moon <MOON5 at MIT-AI> 	Keyword style sequence functions
Date: 20 January 1982 16:34-EST
From: David A. Moon <MOON5 at MIT-AI>
Subject: Keyword style sequence functions
To: common-lisp at SU-AI

Comments on Fahlman's Proposal for Keyword Style Sequence Functions for
Common Lisp of 16 January 1982

I think this is a good proposal and a step in the right direction.  There
are some problems with it, and also a couple issues that come to mind while
reading it.  I will first make some minor comments to get flamed up, and
then say what I really want to say.


- Negative comments first:

ELT and SETELT should be provided in type-specific versions.

My intuition suggests that MAP would be more readable with the result data
type before the function instead of after.  I don't really feel strongly
about this, but it's a suggestion.

I don't like the idea of flushing CONCAT (catenate) and making TO-LIST
allow multiple arguments, for some reason.

There is a problem with the :compare and :compare-not keywords.  For some
functions (the ones that take two sequences as arguments), the predicate is
really and truly an equality test.  It might be clearer to call it :equal.
For these functions I think it makes little sense to have a :compare-not.
Note that these are the same functions for which :if/:if-not are meaningless.
For other functions, such as POSITION, the predicate may not be a symmetric
equality predicate; you might be trying to find the first number in a list
greater than 50, or the number of astronauts whose grandmothers are not
ethnic Russians.  Here it makes sense to have a :compare-not.  It may actually
make sense to have a :compare keyword for these functions and a :equal
keyword for the others.  I'm not ecstatic about the name compare for this,
but I haven't thought of anything better.  This is only a minor esthetic
issue; I wouldn't really mind leaving things the way they are in Fahlman's
proposal.

Re :start and :end.  A nil value for either of these keywords should be
the same as not supplying it (i.e. the appropriate boundary of the sequence.)
This makes a lot of things simpler.  In :from-end mode, is the :start where
you start processing the sequence or the left-hand end of the subsequence?
In the Lisp machine, it is the latter, but either way would be acceptable.

The optional "count" argument to REMOVE and friends should be a keyword
argument.  This is more uniform, doesn't hurt anything, and is trivially
mechanically translatable from the old way.

The set functions, from ADJOIN through NSET-XOR, should not take keywords.
:compare-not is meaningless for these (unlike say position, where you would
use it to find the first element of a sequence that differed from a given
value).  That leaves only one keyword for these functions.  Also it is
-really- a bad idea to put keywords after an &rest argument (as in UNION).
I would suggest that the equal-predicate be a required first argument for
all the set functions; alternatively it could be an optional third argument
except for UNION and INTERSECTION, or those functions could be changed
to operate on only two sets like the others.  I think EQUAL is likely
to be the right predicate for set membership only in rare circumstances,
so that it would not hurt to make the predicate a required argument and
have no default predicate.

The :eq, :eql, :nequal, etc. keywords are really a bad idea.  The reasons
are:  1) They are non-uniform, with some keywords taking arguments and
some not.  See the tirade about this below.  2) They introduce an artificial
barrier between system-defined and user-defined predicates.  This is always
a bad idea, and here serves no useful purpose.  3) They introduce an
unesthetic interchangeability between foo and :foo, which can lead to
a significant amount of confusion.  If the keyword form of specifying the
predicate is too verbose, I would be much happier with making the predicate
be an optional argument, to be followed by keywords.  Personally I don't
think it is verbose enough to justify that.

There are still a lot of string functions in the Lisp machine not generalized
into sequence functions.  I guess it is best to leave that issue for future
generations and get on with the initial specification of Common Lisp.


- Negative comments not really related to the issue at hand:

"(the :string foo)".  Data type names cannot have colons, i.e. cannot be
keywords.  The reason is that the data type system is user-extensible, at
least via defstruct and certainly via other mechanisms such as flavors in
individual implementations and in future Common extensions.  This means
that it is important to be able to use the package system to avoid name
clashes between data types defined by different programs.  The standard
primitive data type names should be globals (or more exactly, should be
in the same package as the standard primitive functions that operate
on those data types.)

Lisp machine experience suggests that it is really not a good idea to have
some keywords take arguments and other keywords not take arguments.  It's a
bit difficult to explain why.  When you are just using these functions with
their keywords in a syntactic way, i.e. essentially as special forms, it
makes no difference except insofar as it makes the documentation more
confusing.  But when you start having programs processing the keywords,
i.e. using the sequence functions as functions rather than special forms,
all hell breaks loose if the syntax isn't uniform.  I think the slight
ugliness of an extra "t" sometimes is well worth it for the sake of
uniformity and simplicity.  On the Lisp machine, we've gone through an
evolution in the last couple of years in which keywords that don't take
arguments have been weeded out.

I don't think much of the scheme for having keywords be constants.  There
is nothing really bad about this except for the danger of confusing
novices, so I guess I could be talked into it, but I don't think getting
rid of the quote mark is a significant improvement (but perhaps it is in
some funny place on your keyboard, where you can't find it, rather than
lower case and to the right of the semicolon as is standard for
typewriters?)


- Minor positive comments

Making REPLACE take keywords is a good idea.

:start1/:end1/:start2/:end2 is a good idea.

The order of arguments to the compare/compare-not function needs to be
strictly defined (since it is not always a commutative function).  Presumably 
the right thing is to make its arguments come in the same order as the
arguments to the sequence function from which they derive.  Thus for SEARCH
the arguments would be an element of sequence1 followed by an element of
sequence2, while for POSITION the arguments would be the item followed
by an element of the sequence.
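
For instance, with that ordering, finding the first number greater than 50
would look something like this (the :compare keyword is used here purely for
illustration):

	(position 50 seq ':compare #'<)
	;; the predicate is applied as (< 50 element), so this returns the
	;; position of the first element greater than 50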

In addition to MEMQ, etc., would it be appropriate to have MEMQL, etc.,
which would use EQL as the comparison predicate?

MEMBER is a better name than POSITION for the predicate that tests for
membership of an element in a sequence, when you don't care about its
position and really want simply a predicate.  I am tempted to propose that
MEMBER be extended to sequences.  Of course, this would be a non-uniform
extension, since the true value would be T rather than a tail of a list (in
other words, MEMBER would be a predicate on sequences but a semi-predicate
on lists.)  This might be a nasty for novices, but it really seems worth
risking that.  Fortunately car, cdr, rplaca, and rplacd of T are errors in
any reasonable implementation, so that accidentally thinking that the truth
value is a list is likely to be caught immediately.


- To get down to the point:

The problems remaining after this proposal are basically two.  One is that there
is still a ridiculous family of "assoc" functions, and the other is that the
three proposed solutions to the -if/-if-not problem (flushing it, having an
optional argument before a required argument, or passing nil as a placeholder)
are all completely unacceptable.

My solution to the first problem is somewhat radical: remove ASSOC and all
its relatives from the language entirely.  Instead, add a new keyword,
:KEY, to the sequence functions.  The argument to :KEY is the function
which is given an element of the sequence and returns its "key", the object
to be fed to the comparison predicate.  :KEY would be accepted by REMOVE,
POSITION, COUNT, MEMBER, and DELETE.  This is the same as the new optional
argument to SORT (and presumably MERGE), which replaced SORTCAR and
SORTSLOT; but I guess we don't want to make those take keywords.  It is
also necessary to add a new sequence function, FIND, which takes arguments
like POSITION but returns the element it finds.  With a :compare of EQ and
no :key, FIND is (almost) trivial, but with other comparisons and/or other
keys, it becomes extremely useful.

The default value for :KEY would be #'ID or IBID or CR, whatever we call
the function that simply returns its argument [I don't like any of those
names much.]  Using #'CAR as the argument gives you ASSOC (from FIND),
MEMASSOC (from MEMBER), POSASSOC (from POSITION), and DELASSOC (from
DELETE).  Using #'CDR as the argument gives you the RASS- forms.  Of
course, usually you don't want to use either CAR or CDR as the key, but
some defstruct structure-element-accessor.
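
Some illustrative equivalences under this :KEY scheme (keyword spelling as in
the proposal; nothing is settled):

	(find x l ':key #'car)       ; plays the role of (assoc x l)
	(member x l ':key #'car)     ; plays the role of memassoc
	(position x l ':key #'car)   ; plays the role of posassoc
	(find x l ':key #'cdr)       ; a rass-style (cdr-keyed) lookup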

In the same way that it may be reasonable to keep MEMQ for historical
reasons and because it is used so often, it is probably good to keep
ASSQ and ASSOC.  But the other a-list searching functions are unnecessary.

My solution to the second problem is to put in separate functions for
the -if and -if-not case.  In fact this is a total of only 10 functions:

	remove-if	remove-if-not	position-if	position-if-not
	count-if	count-if-not	delete-if	delete-if-not
	find-if		find-if-not

MEMBER-IF and MEMBER-IF-NOT are identical to SOME and NOTEVERY if the above
suggestion about extending MEMBER to sequences is adopted, and if my memory
of SOME and NOTEVERY is correct (I don't have a Common Lisp manual here.)
If they are put in anyway, that still makes only 12 functions, which are
really only 6 entries in the manual since -if/-if-not pairs would be
documented together.
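
Typical uses of these would be things like:

	(remove-if-not #'numberp l)    ; keep only the numbers in L
	(position-if #'minusp v)       ; index of the first negative element
	(count-if #'atom l)            ; how many elements of L are atoms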

∂20-Jan-82  1631	Kim.fateman at Berkeley 	numerics and common-lisp 
Date: 20 Jan 1982 16:29:10-PST
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: numerics and common-lisp

The following stuff was sent a while back to GLS, and seemed to
provoke no comment; although it probably raises more questions
than answers, here goes:

*** Issue 81: Complex numbers. Allow SQRT and LOG to produce results in
whatever form is necessary to deliver the mathematically defined result.

RJF:  This is problematical. The mathematically defined result is not
necessarily agreed upon.  Does Log(0) produce an error or a symbol?
(e.g. |log-of-zero| ?)  If a symbol, what happens when you try to
do arithmetic on it? Does sin(x) give up after some specified max x,
or continue to be a periodic function up to limit of machine range,
as on the HP 34?  Is accuracy specified in addition to precision?
Is it possible to specify rounding modes by flag setting or by
calling specific rounding-versions e.g. (plus-round-up x y) ? Such
features make it possible to implement interval arithmetic nicely.
Can one trap (signal, throw) on underflow, overflow,...
It would be a satisfying situation if common lisp, or at least a
superset of it, could exploit the IEEE standard. (Prof. Kahan would
much rather that language standardizers NOT delve too deeply into this,
leaving the semantics  (or "arithmetics") to specialists.)

Is it the case that a complex number could be implemented by
#C(x y) == (complex x y) ?  in which case  (real z) ==(cadr z),
(etc); Is a complex "atomic" in the lisp sense, or is it
the case that (eq (numerator #C(x y)) (numerator #C(x z)))?
Can one "rplac←numerator"?
If one is required to implement another type of atom for the
sake of rationals and another for complexes,
and another for ratios of complexes, then the
utility of this had better be substantial, and the implementation
cost modest.  In the case of x and y rational, there are a variety of
ways of representing x + i*y.  For example, it
is always possible to rationalize the denominator, but is it
required?
If  #R(1 2)  == (rat 1 2), is it the case that
(numerator r) ==(cadr r) ?  what is the numerator of (1/2+i)?

Even if you insist that all complex numbers are floats, not rationals,
you have multiple precisions to deal with.  Is it allowed to 
compute intermediate results to higher precision, or must one truncate
(or round) to some target precision in-between operations?

.......
Thus (SQRT -1.0) -> #C(0.0 1.0) and (LOG -1.0) -> #C(0.0 3.14159265).
Document all this carefully so that the user who doesn't care about
complex numbers isn't bothered too much.  As a rule, if you only play
with integers you won't see floating-point numbers, and if you only
play with non-complex numbers you won't see complex numbers.
.......
RJF: You've given 2 examples where, presumably, integers
are converted not only into floats, but into complex numbers. Your
rule does not seem to be a useful characterization. 
Note also that, for example, asin(1.5) is complex.

*** Issue 82: Branch cuts and boundary cases in mathematical
functions. Tentatively consider compatibility with APL on the subject of
branch cuts and boundary cases.
.......
RJF: Certainly gratuitous differences with APL, Fortran, PL/I, etc., are
not a good idea!
.....

*** Issue 83: Fuzzy numerical comparisons. Have a new function FUZZY=
which takes three arguments: two numbers and a fuzz (relative
tolerance), which defaults in a way that depends on the precision of the
first two arguments.

.......
RJF: Why is this considered a language issue (in Lisp!), when the primary
language for numerical work (Fortran, not APL) does not treat it as one?
The computation of absolute and relative errors is sufficiently simple that
not much would be added by making this part of the language.  I believe the
fuzz business is used to cover up the fact that some languages do not support
integers.  In such systems, some computations result in 1.99999 vs. 2.00000
comparisons, even though both numbers are "integers".

Incidentally, on "mod" of floats, I think that what you want is
like the "integer-part" of the IEEE proposal.  The EMOD instruction on 
the VAX is a brain-damaged attempt to do range-reductions.
.......

*** Issue 93: Complete set of trigonometric functions? Add ASIN, ACOS,
and TAN.


*** Issue 95: Hyperbolic functions. Add SINH, COSH, TANH, ASINH, ACOSH,
and ATANH.
.....
also useful are log(1+x) and exp(x)-1.


*** Issue 96: Are several versions of pi necessary? Eliminate the
variables SHORT-PI, SINGLE-PI, DOUBLE-PI, and LONG-PI, retaining only
PI.  Encourage the user to write such things as (SHORT-FLOAT PI),
(SINGLE-FLOAT (/ PI 2)), etc., when appropriate.
......
RJF: huh?  why not #.(times 4 (atan 1.0)),  #.(times 4 (atan 1.0d0)) etc.
It seems you are placing a burden on the implementors and discussants
of common lisp to write such trivial programs when the same thing
could be accomplished by a comment in the manual. Constants like e could
be handled too...

.......
.......
RJF: Sorry if the above comments sound overly argumentative.  I realize they
are in general not particularly constructive. 
I believe the group here at UCB will be making headway in many 
of the directions required as part of the IEEE support, and that Franz
will be extended.

∂20-Jan-82  2008	Daniel L. Weinreb <dlw at MIT-AI> 	Suggestion     
Date: Wednesday, 20 January 1982, 21:04-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Suggestion    
To: RPG at SU-AI, common-lisp at SU-AI

Sounds good, unless it turns out to be difficult to figure out just
which things are the kernel and which aren't.  Also, when the kernel is
designed, things should be set up so that even if some higher-level
function is NOT in the kernel, it is still possible for some
implementations to write a higher-level function in "machine language"
if they want to, without losing when they load in gobs and gobs of
Lisp-coded higher-level stuff.

∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
Date: 20 Jan 1982 22:04:29-PST
From: Kim.fateman at Berkeley
To: dlw@MIT-AI
Subject: adding to kernel
Cc: common-lisp@su-ai

One of the features of Franz which we addressed early on in the
design for the VAX was how we would link to system calls in UNIX, and
provide calling sequences and appropriate data structures for use
by other languages (C, Fortran, Pascal).  An argument could be made
that linkages of this nature could be done by message passing, if
necessary; an argument could be made that  CL will be so universal
that it would not be necessary to make such linkages at all.  I
have not found these arguments convincing in the past, though in
the perspective of a single CL virtual machine running on many machines,
they might seem better. 

I am unclear as to how many implementations of CL are anticipated, also:
for what machines; 
who will be doing them;
who will be paying for the work;
how much it will cost to get a copy (if CL is done "for profit");
how will maintenance and standardization happen (e.g. under ANSI?);

If these questions have been answered previously, please forgive my
ignorance/impertinence.


∂19-Jan-82  2113	Fahlman at CMU-20C 	Re: Suggestion      
Date: 19 Jan 1982 2328-EST
From: Fahlman at CMU-20C
Subject: Re: Suggestion    
To: RPG at SU-AI
In-Reply-To: Your message of 19-Jan-82 1851-EST


Dick,
Your suggestion makes sense for implementations that are just getting started
now, but for those of us who have already got something designed, coded, and
close to up (and that includes most of the implementations that anyone now
cares about) I'm not sure that identifying and concentrating on a kernel is
a good move.  Sequence functions are quite pervasive and I, for one, would
like to see this issue settled soon.  Multiples, on the other hand, are fairly
localized.  Is there some implementation that is being particularly screwed
by the ordering of the current ad hoc agenda?
-- Scott
-------

I think it is possible for us to not define the kernel explicitly but to
identify those decisions that definitely apply to the kernel as opposed to
the non-kernel. It would seem that an established implementation would rather
know now about any changes to its kernel than later. I suggest that the
order of decisions be changed to decide `kernelish' issues first.
			-rpg-
∂19-Jan-82  1448	Feigenbaum at SUMEX-AIM 	more on common lisp 
Scott:
	Here are some messages I received recently. I'm worried about
Hedrick and the Vax. I'm not too worried about Lisp Machine, you guys,
and us guys (S-1). I am also worried about Griss and Standard Lisp,
which wants to get on the bandwagon. I guess I'd like to settle kernel
stuff first, fluff later.

	I understand your worry about sequences etc. Maybe we could try
to split the effort of studying issues a little. I dunno. It was just
a spur of the moment thought.
			-rpg-

∂19-Jan-82  1448	Feigenbaum at SUMEX-AIM 	more on common lisp 
Date: 19 Jan 1982 1443-PST
From: Feigenbaum at SUMEX-AIM
Subject: more on common lisp
To:   gabriel at SU-AI

Mail-from: ARPANET host PARC-MAXC rcvd at 19-Jan-82 1331-PST
Date: 19 Jan 1982 13:12 PST
From: Masinter at PARC-MAXC
to: Feigenbaum@sumex-aim
Subject: Common Lisp- reply to Hedrick

It is a shame that such misinformation gets such rapid dissemination....

Date: 19 Jan 1982 12:57 PST
From: Masinter at PARC-MAXC
Subject: Re: CommonLisp at Rutgers
To: Hedrick@Rutgers
cc: Masinter

A copy of your message to "bboard at RUTGERS, griss at UTAH-20, admin.mrc at
SU-SCORE, jsol at RUTGERS" was forwarded to me. I would like to rebut some of
the points in it:

I think that Common Lisp has the potential for being a good lisp dialect which
will carry research forward in the future. I do not think, however, that people
should underestimate the amount of time before Common Lisp could possibly be a
reality.

The Common Lisp manual is nowhere near being complete. Given the current
rate of progress, the Common Lisp language definition would probably not be
resolved for two years--most of the hard issues have merely been deferred (e.g.,
T and NIL, multiple-values), and there are many parts of the manual which are
simply missing. Given the number of people who are joining into the discussion,
some drastic measures will have to be taken to resolve some of the more serious
problems within a reasonable timeframe (say a year).

Beyond that, the number of things which would have to be done to bring up a
new implementation of CommonLisp leads me to believe that the kernel for
another machine, such as the Dec-20, would take on the order of 5 man-years at
least. For many of the features in the manual, it is essential that they be built
into the kernel (most notably the arithmetic features and the multiple-value
mechanism) rather than in shared Lisp code. I believe that many of these may
make an implementation of Common Lisp more "difficult to implement efficiently
and cleanly" than Interlisp. 

I think that the Interlisp-VAX effort has been progressing quite well. They have
focused on the important problems before them, and are proceeding quite well. I
do not know for sure, but it is likely that they will deliver a useful system
complete with a programming environment long before the VAX/NIL project,
which has consumed much more resources. When you were interacting with the
group of Interlisp implementors at Xerox, BBN and ISI about implementing
Interlisp, we cautioned you about being optimistic about the amount of
manpower required. What seems to have happened is that you have come away
believing that Common Lisp would be easier to implement.  I don't think that is
the case by far.

Given your current manpower estimate (one full-time person and one RA) I do
not believe you have the critical mass to bring off a useful implementation of
Common Lisp. I would hate to see a replay of the previous situation with
Interlisp-VAX, where budgets were made and machines bought on the basis of a
hopeless software project. It is not that you are not competent to do a reasonable
job of implementation, it is just that creating a new implementation of an already
specified language is much much harder than merely creating a new
implementation of a language originally designed for another processor. 

I do think that an Interlisp-20 using extended virtual addressing might be
possible, given the amount of work that has gone into making Interlisp
transportable, the current number of compatible implementations (10, D, Jericho,
VAX) and the fact that Interlisp "grew up" in the Tenex/Tops-20 world, and that
some of the ordinarily more difficult problems, such as file names and operating
system conventions, are already tuned for that operating system. I think that a
year of your spare time and Josh for one month seems very thin.

Larry
-------

∂20-Jan-82  2132	Fahlman at CMU-20C 	Implementations
Date: 21 Jan 1982 0024-EST
From: Fahlman at CMU-20C
Subject: Implementations
To: rpg at SU-AI
cc: steele at CMU-20C, fahlman at CMU-20C

Dick,

I agree that, where a choice must be made, we should give first priority
to settling kernel-ish issues.  However, I think that the debate on
sequence functions is not detracting from more kernelish things, so I
see no reason not to go on with that.

Thanks for forwarding Masinter's note to me.  I found him to be awfully
pessimistic.  I believe that the white pages will be essentially complete
and in a form that just about all of us can agree on within two months.
Of course, the Vax NIL crowd (or anyone else, for that matter) could delay
ratification indefinitely, even if the rest of us have come together, but I
think we had best deal with that when the need arises.  We may have to
do something to force convergence if it does not occur naturally.  My
estimate may be a bit optimistic, but I don't see how anyone can look at
what has happened since last April and decide that the white pages will
not be done for two years.

Maybe Masinter's two years includes the time to develop all of the
yellow pages stuff -- editors, cross referencers, and so on.  If so, I
tend to agree with his estimate.  To an Interlisper, Common Lisp will
not offer all of the comforts of home until all this is done and stable,
and a couple of years is a fair estimate for all of this stuff, given
that we haven't really started thinking about this.  I certainly don't
expect the Interlisp folks to start flocking over until all this is
ready, but I think we will have the Perq and Vax implementations
together within 6 months or so and fairly stable within a year.

I had assumed that Guy had been keeping you informed of the negotiations
we have had with DEC on Common Lisp for VAX, but maybe he has not.  The
situation is this: DEC has been extremely eager to get a Common Lisp up
on Vax VMS, due to pressure from Schlumberger and some other customers,
plus their own internal plans for building some expert systems.  Vax NIL
is not officially abandoned, but looks more and more dubious to them,
and to the rest of us.  A couple of months ago, I proposed to DEC that
we could build them a fairly decent compiler just by adding a
post-processor to the Spice Lisp byte-code compiler.  This
post-processor would turn the simple byte codes into in-line Vax
instructions and the more complex ones into jumps off to hand-coded
functions.  Given this compiler, one could then get a Lisp system up
simply by using the Common Lisp in Common Lisp code that we have
developed for Spice.  The extra effort to do the Vax implementation
amounts to only a few man-months and, once it is done, the system will
be totally compatible with the Spice implementation and will track any
improvements.  With some additional optimizations and a bit of tuning,
the performance of this system should be comparable to any other Lisp on
the Vax, and probably better than Franz.

DEC responded to this proposal with more enthusiasm than I expected.  It
is now nearly certain that they will be placing two DEC employees
(namely, ex-CMU grad students Dave McDonald and Walter van Roggen) here
in Pittsburgh to work on this, with consulting by Guy and me.  The goal
is to get a Common Lisp running on the Vax in six months, and to spend
the following 6 months tuning and polishing.  I feel confident that this
goal will be met.  The system will be done first for VMS, but I think we
have convinced DEC that they should invest the epsilon extra effort
needed to get a Unix version up as well.

So even if MIT totally drops the ball on VAX NIL, I think that it is a
pretty safe bet that a Common Lisp for Vax will be up within a year.  If
MIT wins, so much the better: the world will have a choice between a
hairy NIL and a basic Common Lisp implementation.

We are suggesting to Chuck Hedrick that he do essentially the same thing
to bring up a Common Lisp for the extended-address 20.  If he does, then
this implementation should be done in finite time as well, and should
end up being fully compatible with the other systems.  If he decides
instead to do a traditional brute-force implementation with lots of
assembly code, then I tend to agree with Masinter's view: it will take
forever.

I think we may have come up with an interesting kind of portability
here.  Anyway, I thought you would be interested in hearing all the
latest news on this.

-- Scott
-------

∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
Date: 20 Jan 1982 22:04:29-PST
From: Kim.fateman at Berkeley
To: dlw@MIT-AI
Subject: adding to kernel
Cc: common-lisp@su-ai

One of the features of Franz which we addressed early on in the
design for the VAX was how we would link to system calls in UNIX, and
provide calling sequences and appropriate data structures for use
by other languages (C, Fortran, Pascal).  An argument could be made
that linkages of this nature could be done by message passing, if
necessary; an argument could be made that  CL will be so universal
that it would not be necessary to make such linkages at all.  I
have not found these arguments convincing in the past, though in
the perspective of a single CL virtual machine running on many machines,
they might seem better. 

I am unclear as to how many implementations of CL are anticipated, also:
for what machines; 
who will be doing them;
who will be paying for the work;
how much it will cost to get a copy (if CL is done "for profit");
how will maintenance and standardization happen (e.g. under ANSI?);

If these questions have been answered previously, please forgive my
ignorance/impertinence.


The known and suspected implementations for Common Lisp are:

	S-1 Mark IIA, paid for by ONR, done by RPG, GLS, Rod Brooks and others
	SPICELISP, paid for by ARPA, done by SEF, GLS, students, some RPG
	ZETALISP, paid for by Symbolics, by Symbolics
	VAX Common Lisp, probably paid for by DEC, done by CMU Spice personnel
	Extended addressing 20, probably paid for by DEC, done by Rutgers (Hedrick)
	68000, Burroughs, IBM, Various portable versions done by Utah group,
		paid for by ARPA (hopefully spoken).
	Retrofit to MacLisp by concerned citizens, maybe.
∂21-Jan-82  1746	Earl A. Killian <EAK at MIT-MC> 	SET functions    
Date: 21 January 1982 17:26-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  SET functions
To: Morrison at UTAH-20, RMS at MIT-AI
cc: common-lisp at SU-AI

Well if you're going to propose two changes like that, you might
as well do SETF -> SET, instead of SETF -> SETQ.  It's shorter
and people wouldn't wonder what the Q or F means.

But actually I'm not particularly in favor of eliminating the set
functions, even though I tend to use SETF instead myself, merely
because I don't see how their nonexistence would clean up
anything.

∂21-Jan-82  1803	Richard M. Stallman <RMS at MIT-AI>
Date: 21 January 1982 18:01-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: EAK at MIT-MC
cc: common-lisp at SU-AI

The point is not to get rid of the setting functions, but to
reduce their status in the documentation.  Actually getting rid of
them doesn't accomplish much, as you say, and also is too great
an incompatibility.  (For the same reason, SETF cannot be renamed
to SET, but can be renamed to SETQ).  But moving them all to an
appendix on compatibility and telling most users simply
"to alter anything, use SETF" is a tremendous improvement in
the simplicity of the language as perceived by users, even if
there is no change in the actual system that they use.
(At the same time, any plans to introduce new setting functions
that are not needed for compatibility can be canceled).
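Concretely, the unification being argued for is that one form subsumes
the whole zoo of setting functions the user would otherwise have to
remember, e.g. (old-style equivalents shown for comparison; argument
orders vary by dialect):

  (setf (car x) 'a)              ; instead of (rplaca x 'a)
  (setf (get sym 'color) 'red)   ; instead of (putprop sym 'red 'color)
  (setf (aref v 3) 99)           ; instead of a special array-storing function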

∂21-Jan-82  1844	Don Morrison <Morrison at UTAH-20> 
Date: 21 Jan 1982 1939-MST
From: Don Morrison <Morrison at UTAH-20>
To: RMS at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 21-Jan-82 1601-MST

I'm not convinced that drastic renamings (such as SETF => SET) are
impractical.  Just as you move the documentation to a "compatibility
appendix", you move the old semantics to a "compatibility package".
Old code must be run with the reader interning in the MACLISP package
or the Franz LISP package, or whatever.  The only things which must
really change are the programmers -- and I believe the effort of
changing one's thoughts to a conceptually simpler LISP would, in the
long run, save programmers time and effort.

There is, however, the problem of maintenance of old code.  One would
not like to have to remember seventeen dialects of LISP just to
maintain old code.  But I suspect that maintenance would naturally
proceed by rewriting large hunks of code, which would then be done in
the "clean" dialect.  LISP code is not exempt from the usual folklore
that tweaking broken code only makes it worse.  This is just
conjecture; has experience on the LISP Machine shown that old MACLISP
code tends to get rewritten as it needs to change, or does it just get
tweaked, mostly using those historical atrocities left in for MACLISP
compatibility?

It would be a shame to see a standardized Common LISP incorporate the
same sort of historical abominations as those which FORTRAN 77 lives
with.
-------

∂21-Jan-82  2053	George J. Carrette <GJC at MIT-MC> 
Date: 21 January 1982 23:50-EST
From: George J. Carrette <GJC at MIT-MC>
To: Morrison at UTAH-20
cc: RMS at MIT-AI, common-lisp at SU-AI

My experience with running macsyma in maclisp and lispm is that what
happens is that compatibility features are not quite compatible, and
that gross amounts of tweaking, beyond the scope of anything possible in
FORTRAN 77, go on.  Much of the tweaking takes the form of adding
another layer of abstraction through macros, not using ANY known form
of lisp, but one which is a generalization, and obscure to anyone but
a macsyma-lisp hacker.  At the same time the *really* gross old code
gets rewritten, when significant new features are provided, like
Pathnames.

Anyway, in NIL I wanted to get up macsyma as quickly as possible
without grossing out RLB or myself, or overloading NIL with so many
compatibility features, as happened in the Lispmachine. Also there
was that bad-assed T and NIL problem we only talked about a little
at the common-lisp meeting. [However, more severe problems, like the
fact that macsyma would not run with error-checking in CAR/CDR 
had already been fixed by smoking it out on the Lispmachine.]



∂21-Jan-82  1144	Sridharan at RUTGERS (Sri) 	S-1 CommonLisp   
Date: 21 Jan 1982 1435-EST
From: Sridharan at RUTGERS (Sri)
Subject: S-1 CommonLisp
To: rpg at SU-AI, guy.steele at CMU-10A

I have been kicking around an idea to build a multiprocessor aimed at
running some form of Concurrent Lisp as well as my AI language AIMDS.
I came across the S-1 project and it is clear I need to find out about
this project in detail.  Can you arrange to have me receive what
reports and documents are available on this project?

More recently, Hedrick mentioned in a note that there is an effort
to develop Lisp for the S-1.  How exciting!  Can you provide me
some background on this and describe the goals and current status?

My project is an attempt to develop coarse-grain parallelism in
a multiprocessor environment, each processor being of the order of a
Lisp-machine, with a switching element between processors and memories,
with ability for the user/programmer to write ordinary Lisp code,
enhanced in places with necessary declarations and also new primitives
to make it feasible to take advantage of parallelism.  One of the
goals of the project is to support gradual conversion of existing
code to take advantage of available concurrency.

My mailing address is
N.S.Sridharan
Department of Computer Science
Rutgers University, Hill Center
New Brunswick, NJ 08903
-------

∂22-Jan-82  1842	Fahlman at CMU-20C 	Re: adding to kernel
Date: 22 Jan 1982 2140-EST
From: Fahlman at CMU-20C
Subject: Re: adding to kernel
To: Kim.fateman at UCB-C70
cc: common-lisp at SU-AI
In-Reply-To: Your message of 21-Jan-82 0104-EST


The ability to link system calls and compiled routines written in the
barbarous tongues into Common Lisp will be important in some
implementations.  In others, this will be handled by inter-process
message passing (Spice) or by translating everything into Lisp or
Lispish byte-codes (Symbolics).  In any event, it seems clear that
features of this sort must be implementation-dependent packages rather
than parts of the Common Lisp core.

As for what implementations are planned, I know of the following that
are definitely underway: Spice Lisp, S1-NIL, VAX-NIL, and Zetalisp
(Symbolics).  Several other implementations (for Vax, Tops-20, IBM 4300
series, and a portable implementation from the folks at Utah) are being
considered, but it is probably premature to discuss the details of any
of these, since as far as I know none of them are definite as yet.  The
one implementation I can discuss is Spice Lisp.

Spice is a multiple process, multiple language, portable computing
environment for powerful personal machines (i.e. more powerful than the
current generation of micros).  It is being developed by a large group
of people at CMU, with mostly ARPA funding.  Spice Lisp is the Common
Lisp implementation for personal machines running Spice.  Scott Fahlman
and Guy Steele are in charge.  The first implementation is for the Perq
1a with 16K microstore and 1 Mbyte main memory (it will NOT run on the
Perq 1).  We will probably be porting all of the Spice system, including
the Lisp, to the Symbolics 3600 when this machine is available, with
other implementations probably to follow.

The PERQ implementation will probably be distributed and maintained by
3RCC as one of the operating systems for the PERQ; we would hope to
develop similar arrangements with other manufacturers of machines on
which Spice runs, since we at CMU are not set up to do maintenance for
lots of customers ourselves.

Standardization for awhile will (we hope) be a result of adhering to the
Common Lisp Manual; once Common Lisp has had a couple of years to
settle, it might be worth freezing a version and going for ANSI
standardization, but not until then.
-------

∂22-Jan-82  1914	Fahlman at CMU-20C 	Multiple values
Date: 22 Jan 1982 2209-EST
From: Fahlman at CMU-20C
Subject: Multiple values
To: common-lisp at SU-AI


It has now been a week since I suggested flushing the lambda-list
versions of the multiple value catching forms.  Nobody has leapt up to
defend these, so I take it that nobody is as passionate about keeping
these around as I am about flushing them.  Therefore, unless strong
objections appear soon, I propose that we go with the simple Lisp
Machine versions plus M-V-Call in the next version of the manual.  (If,
once the business about lexical binding is resolved, it is clear that
these can easily be implemented as special cases of M-V-Call, we can put
them back in again.)

The CALL construct proposed by Stallman seems very strange and low-level
to me.  Does anyone really use this?  For what?  I wouldn't object to
having this around in a hackers-only package, but I'm not sure random
users ought to mess with it.  Whatever we do with CALL, I would like to
keep M-V-Call as well, as its use seems a good deal clearer without the
spreading and such mixed in.
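For reference, the attraction of M-V-Call is that it hands the called
function all the values of each argument form, with no spreading forms in
sight; assuming FLOOR returns a quotient and a remainder as two values,
something like

  (multiple-value-call #'+ (floor 7 2) (floor 9 4))   ; => 3 + 1 + 2 + 1 = 7

picks up everything that comes back.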

-- Scott
-------

∂22-Jan-82  2132	Kim.fateman at Berkeley 	Re: adding to kernel
Date: 22 Jan 1982 21:27:03-PST
From: Kim.fateman at Berkeley
To: Fahlman@CMU-20C
Subject: Re: adding to kernel
Cc: common-lisp@su-ai

There is a difference between the "common lisp core" and the
"kernel" of a particular implementation.  The common lisp core
presumably would have a function which obtains the time.  Extended
common lisp might convert the time to Roman numerals.  The kernel
would have to have a function (in most cases, written in something
other than lisp) which obtains the time from the hardware or
operating system.  I believe that the common lisp core should be
delineated, and the extended common lisp (written in common lisp core)
should be mostly identical from system to system.  What I would like
to know, though, is what will be required of the kernel, because it
will enable one to say to a manufacturer, it is impossible to write
a common lisp for this architecture because it lacks (say) a real-time
clock, or does not support (in the UNIX parlance) "raw i/o", or
perhaps multiprocessing...
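To make the layering concrete (all of the names below are invented for
the example): the kernel supplies the machine-dependent primitive, the
core is portable Lisp written on top of it, and the extensions are
written purely in terms of the core.

  ;; kernel: implementation-specific; on most systems this would be a
  ;; system call rather than Lisp (a stub value stands in for it here)
  (defun kernel-read-clock () 1982)

  ;; core: portable Lisp, the same in every implementation
  (defun current-time () (kernel-read-clock))

  ;; extended common lisp: written entirely in terms of the core
  (defun current-time-in-roman-numerals ()
    (format nil "~@R" (current-time)))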

I hope that the results of common lisp discussions become available for
less than the $10k (or more) per cpu that keeps us at Berkeley from
using Scribe.  I have no objection to a maintenance organization, but
I hope copies of relevant programs (etc) are made available in an
unmaintained form for educational institutions or other worthy types.

Do the proprietor(s) of NIL think it is a "common lisp implementation"?
That is, if NIL and CL differ in specifications, will NIL change, or
will NIL be NIL, and a new thing, CL emerge?  If CL is sufficiently
well defined that, for example, it can be written in Franz Lisp with
some C-code added, presumably a CL compatibility package could be
written.  Would that make Franz a "common lisp implementation"?
(I am perfectly happy with the idea of variants of Franz; e.g. users
here have a choice of the CMU top-level or the (raw) default; they
can have a moderately interlisp-like set of functions ("defineq" etc.)
or the default maclisp-ish.)

∂23-Jan-82  0409	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
Date: 23 January 1982 07:07-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  adding to kernel
To: Kim.fateman at UCB-C70
cc: common-lisp at SU-AI, Fahlman at CMU-20C

I don't know the exact delivery time for Symbolics' new "L" machine,
nor the exact state of CMU spice-lisp [which is on the front burner
now for a micro-coded implementation on their own machine, no?] with
respect to any possible VAX implementation; but I suspect that of
all the lisp implementations planning to support the COMMON-LISP
standard, MIT's NIL is the closest to release. Can I get some
feedback on this?

As far as bucks go "$$$" gee.  CPU's that can run lisp are not cheap
in themselves.  However, I don't know anything concrete about the
marketing of NIL.  Here is a cute one: when the New Implementation of Lisp
becomes the Old Implementation of Lisp, then NIL becomes OIL.
However, right now it is still NEW, so you don't have to worry.

Unstated assumptions (so far) in Common-lisp?
[1] Error-checking CAR/CDR by default in compiled code.
[2] Lispm-featurefull debugging in compiled code.

Maybe this need not be part of the standard, but everybody knows that
it is part of the usability and marketability of a modern lisp.

Here is my guess as to what NIL will look like by the time the UNIX
port is made: Virtual Machine written in SCHEME, with the SCHEME compiler
written in NIL producing standard UNIX assembler. NIL written in NIL,
and the common-lisp support written in NIL and common-lisp. A Maclisp
compatibility namespace supported by functions written in NIL.
VM for unix written in Scheme rather than "C" might seem strange to
some, but it comes from a life-long Unix/C hacker around here who
wants to raise the stakes a bit to make it interesting. You know, one
thing for sure around MIT => If it ain't interesting it ain't going to
get done! <= There being so many other things to do, not to even
mention other, possibly commercial organizations.



∂23-Jan-82  0910	RPG  
To:   common-lisp at SU-AI  
MV Gauntlet Picked Up
Ok. I believe that even if the implementation details are grossly different
all constructs that bind should have the same syntax. Thus,
if any MV construct binds, and is called ``-BIND'', ``-LAMBDA'', or
``-LET'', it should behave the same way as anything else that purports
to bind (like LAMBDA).  Since LET and LAMBDA are similar to most naive
users, too, I would like to see LET and LAMBDA be brought into line.

I would like a uniform, consistent language, so I strongly propose
either simplifying LAMBDA to be as simple as Lisp Machine multiple-value-bind
and using Lisp Machine style MV's as Scott suggests, or going to complex
LAMBDA, complex MV-lambda as in the current scheme, and flushing Lisp
Machine Multiple-value-bind. I propose not doing a mixture. 
			-rpg-

∂23-Jan-82  1841	Fahlman at CMU-20C  
Date: 23 Jan 1982 2136-EST
From: Fahlman at CMU-20C
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 23-Jan-82 1210-EST


It seems clear to me that we MUST support two kinds of binding forms: a
simple-syntax form as in PROG and LET, and a more complex form as in
DEFUN and LAMBDA.  (Not to mention odd things like DO and PROGV that are
different but necessary.)  It clearly makes no sense to hair up PROG and
LET with optionals and rest args, since there is no possible use for
these things -- they would just confuse people and be a pain to
implement.  It is also clear that we are not going to abandon optionals
and rest args in DEFUN and LAMBDA in the name of uniformity -- they are
too big a win when you are defining functions that are going to be
called from a lot of different places, not all of them necessarily known
at compile-time.  So I don't really see what RPG is arguing for.  The
issue is not whether to support both a simple and a hairy syntax for
binding forms; the issue is simply which of these we want the
MV-catching forms to be.  And in answering that question, as in many
other places in the language, we must consider not only uniformity as
seen by Lisp theologians, but also implementation cost, runtime
efficiency, and what will be least confusing to the typical user.
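In other words, the contrast is between the simple pairing syntax

  (let ((x 1) (y 2))
    (+ x y))

and the full lambda-list machinery

  (defun f (x &optional (y 2) &rest more)
    (list x y more))

and the only question is which of the two the MV-catching forms should
resemble.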

-- Scott
-------

∂23-Jan-82  2029	Fahlman at CMU-20C 	Re:  adding to kernel    
Date: 23 Jan 1982 2319-EST
From: Fahlman at CMU-20C
Subject: Re:  adding to kernel
To: GJC at MIT-MC
cc: common-lisp at SU-AI
In-Reply-To: Your message of 23-Jan-82 0707-EST

In reply to GJC's recent message:

It is hard to comment on whether NIL is closer to being released than
other Common Lisp implementations, since you don't give us a time
estimate for NIL, and you don't really explain what you mean by
"released".  I understand that you have something turning over on
various machines at MIT, but it is unclear to me how complete this
version is or how much work has to be done to make it a Common Lisp
superset.  Also, how much manpower do you folks have left?

The PERQ implementation of Spice Lisp is indeed on our front burner.
Unfortunately, we do not yet have an instance of the PERQ 1a processor
upon which to run this.  The PERQ microcode is essentially complete and
has been debugged on an emulator.  The rest of the code, written in
Common Lisp itself, is being debugged on a different emulator.  If we
get the manual settled soon and if 3RCC delivers the 1a soon, we
should have a Spartan but usable Common Lisp up by the start of the
summer.  The Perq 1a will probably not be generally available until
mid-summer, given the delays in getting the prototype together.

By summer's end we should have an Emacs-like editor running, along with
some fairly nice debugging tools.  Of course, the system will be
improving for a couple of years beyond that as additional user amenities
appear.  I have no idea how long it will take 3RCC to start distributing
and supporting this Lisp, if that's what you mean by "release".  Their
customers might force them to move more quickly on this than they
otherwise would, but they have a lot of infrastructure to build -- no
serious Lispers over there at present.

As for your "unstated assumptions":

1. The amount of runtime error checking done by compiled code must be
left up to the various implementations, in general.  A machine like the
Vax will probably do less of this than a microcoded implementation, and
a native-code compiler may well want to give the user a compile-time
choice between some checking and maximum speed.  I think that the white
pages should just say "X is an error" and leave the question of how much
checking is done in compiled code to the various implementors.

2. The question of how (or whether) the user can debug compiled code is
also implementation-dependent, since the runtime representations and
stack formats may differ radically.  In addition, the user interface for
a debugging package will depend on the type of display used, the
conventions of the home system, and other such things, though one can
imagine that the debuggers on similar environments might make an effort
to look the same to the user.  The white pages should probably not
specify any debugging aids at all, or at most should specify a few
standard peeking functions that all implementations can easily support.

I agree that any Common Lisp implementation will need SOME decent debugging
aids before it will be taken seriously, but that does not mean that this
should be a part of the Common Lisp standard.

-- Scott
-------

∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 04:24-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I agree with Fahlman about binding constructs.
I want LAMBDA to be the way it is, and LET to be the way it is,
and certainly not the same.

As for multiple values, if LET is fully extended to do what
SETF can do, then (LET (((VALUES A B C) m-v-returning-form)) ...)
can be used to replace M-V-BIND, just as (SETF (VALUES A B C) ...)
can replace MULTIPLE-VALUES.  I never use MULTIPLE-VALUES any more
because I think that the SETF style is clearer.

∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 06:02-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I would like to clear up a misunderstanding that seems to be
prevalent.  The MIT Lisp machine system, used by Symbolics and LMI, is
probably going to be converted to support Common Lisp (which is the
motivation for my participation in the design effort for Common Lisp
clean).  Whenever this happens, Common Lisp will be available on
the CADR machine (as found at MIT and as sold by LMI and Symbolics)
and the Symbolics L machine (after that exists), and on the second
generation LMI machine (after that exists).

I can't speak for LMI's opinion of Common Lisp, but if MIT converts,
LMI will certainly do so.  As the main Lisp machine hacker at MIT, I
can say that I like Common Lisp.

It is not certain when either of the two new machines will appear, or
when the Lisp machine system itself will support Common Lisp.  Since
these three events are nearly independent, they could happen in any
order.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:23-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

To clear up another random point: the name "Zetalisp" is not a Symbolics
proprietary name.  It is just a name that has been made up to replace
the ungainly name "Lisp Machine Lisp".  The reason for needing a name is
that I believe that people associate the Lisp Machine with Maclisp,
including all of the bad things that they have traditionally believed
about Maclisp, like that it has a user interface far inferior to that of
Interlisp.

I certainly hope that all of the Lisp Machines everywhere will convert
to Common Lisp together.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:20-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

If I understand what RPG is saying then I think that I am not convinced
by his point.  I don't think that, just because multiple-value-bind takes
a list of variables that are being bound to values, it HAS
to have all the features that LAMBDA combinations have, in the name of
language simplicity, because I just don't think that the inconsistency
there bothers me very much.  It is a very localized inconsistency and I
really do not believe it is going to confuse people much.

However, I still object to RMS's proposal, as I am still opposed to having
"destructuring LET".  I have flamed about this enough in the past that I
will not do it now.  However, having a "destructuring-bind" (by some
name) form that is like LET except that it destructures might be a
reasonable way to let multiple-value-bind work
without any perceived language inconsistency.
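For illustration, the kind of form meant (DESTRUCTURING-BIND here is just
the placeholder name used above):

  (destructuring-bind (a (b . c)) '(1 (2 3 4))
    (list a b c))            ; => (1 2 (3 4))

i.e. LET-like binding, except that the pattern is a tree rather than a
flat list of variables.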

∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
Date: 24 January 1982 23:06-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  adding to kernel
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI

    From: Fahlman at CMU-20C
    It is hard to comment on whether NIL is closer to being released than
    other Common Lisp implementations, since you don't give us a time
    estimate for NIL.

Oh.  I had announced a release date of JAN 30.  But with the air-conditioners
down for greater than a week, that's got to go to at least FEB 10.  But
FEB 10 is the first week of classes at MIT, so I'll have JM, GJS, and
others on my case to get other stuff working.  Sigh.
By release I mean that it is in a useful state, i.e. people will be able
to run their lisp programs in it. We have two concrete tests though, 
[1] To bring up "LSB".
   [A] This gives us stuff like a full hair FORMAT.
   [B] Martin's parser.
[2] To run Macsyma on the BEGIN, SIN, MATRIX, ALGSYS, DEFINT, ODE2 and 
    HAYAT demos. 

Imagine bringing yourself and a tape to a naked VMS site, and installing
Emacs, a modern lisp, and Macsyma, in that order. You can really
blow away the people who have heard about these things but never
had a chance to use them, especially on their very own machine.
One feeling that makes the hacking worthwhile.

Anyway, when I brought Macsyma over to the Plasma Fusion
Center Alcator Vax, I was doing all the taylor series, integrals and
equation solving they threw at me. Stuff like
INTEGRATE(SIN(X↑2)*EXP(X↑2)*X↑2,X); Then DIFF it, then RATSIMP and TRIGREDUCE
to get back to the starting point.(try that on MC and see how many
files get loaded). (Sorry, gibberish to non-macsyma-hackers.)
=> So I can say that macsyma is released to MIT sites now. (MIT-LNS too). 
   People can use it and I'll field any bug reports. <=

Point of Confusion: Some people are confused as to what Common-Lisp is.
                    Even people at DEC.

-GJC

∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
Date: 25 Jan 1982 0125-EST
From: Fahlman at CMU-20C
Subject: Sequences
To: common-lisp at SU-AI


I have spent a couple of days mulling over RPG's suggestion for putting
the keywords into a list in functional position.  I thought maybe I
could get used to the unfamiliarity of the syntax and learn to like
this proposal.  Unfortunately, I can't.

I do like Guy's proposal for dropping START/END arguments and also
several of the suggestions that Moon made.  I am trying to merge all
of this into a revised proposal in the next day or two.  Watch this
space.

-- Scott
-------

∂24-Jan-82  2246	Kim.fateman at Berkeley 	NIL/Macsyma    
Date: 24 Jan 1982 22:40:50-PST
From: Kim.fateman at Berkeley
To: gjc@mit-mc
Subject: NIL/Macsyma 
Cc: common-lisp@SU-AI

Since it has been possible to run Macsyma on VMS sites (under Eunice or
its precursor) since April, 1980, (when we dropped off a copy at LCS),
it is not clear to me what GJC's ballyhoo is about.  If the physics
sites are only now getting a partly working Macsyma for VMS, it only
brings to mind the question of whether LCS ever sent out copies of the VMS-
Macsyma we gave them, to other MIT sites.

But getting Maclisp programs up under NIL should not be the benchmark,
nor is it clear what the relationship to common lisp is.
Having macsyma run under common lisp (whatever that will be)
would be very nice, of course,
whether having macsyma run under NIL is a step in that direction or
not.  It might also be nice to see, for example, one of the big interlisp
systems.

∂25-Jan-82  1558	DILL at CMU-20C 	eql => eq?   
Date: 25 Jan 1982 1857-EST
From: DILL at CMU-20C
Subject: eql => eq?
To: common-lisp at SU-AI

Proposal: rename the function "eq" in common lisp to be something like
"si:internal-eq-predicate", and the rename "eql" to be "eq".  This would
have several advantages.

 * Simplification by reducing the number of equality tests.

 * Simplification by reducing the number of different versions of
   various predicates that depend on the type of equality test you
   want.

 * Greater machine independence of lisp programs (whether eq and equal
   are the same function for various datatypes is heavily 
   implementation-dependent, while eql is defined to be relatively 
   machine-independent; furthermore, functions like memq in the current
   common lisp proposal make it easier to use eq comparisons than eql).

Possible disadvantages:

 * Do people LIKE having, say, numbers with identical values not be eq?
   If so, they won't like this.

 * Efficiency problems.

I don't believe the first complaint.  If there are no destructive
operations defined for an object, eq and equal ought to do the same
thing.

The second complaint should not be significant in interpreted code,
since the overhead of doing a type dispatch will probably be insignificant
in comparison with, say, finding the right subr and calling it.

In compiled code, taking the time to declare variable types should allow
the compiler to open-code "eq" into address comparisons, if appropriate,
even in the absence of a hairy compiler.  A hairy compiler could do even
better.

Finally, in the case where someone wants efficiency at the price of
tastefulness and machine-independence, the less convenient
implementation-dependent eq could be used.
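Roughly speaking, the relationship can be modeled like this (illustration
only; the exact treatment of the different number types is the delicate
part, and characters would need the same treatment as numbers):

  (defun eql-model (x y)
    ;; pointer identity, extended to compare numbers of the same
    ;; type by value
    (or (eq x y)
        (and (numberp x) (numberp y)
             (= x y)
             (equal (type-of x) (type-of y)))))

so the only code that can tell the difference is code that compares
numbers for identity.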
-------

∂25-Jan-82  1853	Fahlman at CMU-20C 	Re: eql => eq? 
Date: 25 Jan 1982 2151-EST
From: Fahlman at CMU-20C
Subject: Re: eql => eq?
To: DILL at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 25-Jan-82 1857-EST


I don't think it would be wise to replace EQ with EQL on a wholesale basis.
On microcoded machines, this can be made to win just fine and the added
tastefulness is worth it.  But Common Lisp has to run on vaxen and such as
well, and there the difference can be a factor of three.  In scattered
use, this would not be a problem, but EQ appears in many inner loops.
-- Scott
-------

∂27-Jan-82  1034	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
Date: 27 Jan 1982 1332-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: eql => eq?
To: DILL at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 25-Jan-82 1857-EST

Possibly CL is turning into something so far from normal Lisp that I
can't use my experience with Lisp to judge it.  However in the Lisp
programming that I am used to, I often thought in terms of the actual
data structures I was building, not of course at the bit level, but at
least at the level of pointers.  When doing this sort of programming,
raw comparison of pointers was a conceptual primitive.  Certainly if you
are going to turn Lisp into ADA, which seems the trend in much recent
thinking (not just the CL design effort), EQ will clearly be, as you
say, an internal implementation primitive.  But if anyone wants to
continue to program as I did, then it will be nice to have the real EQ
around.  Now certainly in most cases where EQ is being used to compare
pointers, EQL will work just as well, since these two things differ only
on objects where EQ would not validly be used in the style of
programming I am talking about.  However it is still EQ that is the
conceptual primitive, and I somehow feel better about the language if
when I want to compare pointers I get a primitive that compares
pointers, and not one that tests to see whether what I have is something
that it thinks I should be able to compare and if not does some part of
EQUAL (or is that name out of date now, too?).
-------

∂27-Jan-82  1445	Jon L White <JONL at MIT-MC> 	Multiple mailing lists?  
Date: 27 January 1982 17:27-EST
From: Jon L White <JONL at MIT-MC>
Subject: Multiple mailing lists?
To: common-lisp at SU-AI

Is everyone on this mailing list also on the LISP-FORUM list?
I.e., is there anyone who did not get my note entitled "Two little 
suggestions for macroexpansion" which was just sent out to LISP-FORUM?

∂27-Jan-82  1438	Jon L White <JONL at MIT-MC> 	Two little suggestions for macroexpansion    
Date: 27 January 1982 17:24-EST
From: Jon L White <JONL at MIT-MC>
Subject: Two little suggestions for macroexpansion
To: LISP-FORUM at MIT-MC

Several times in the COMMON LISP discussions, individuals have
proffered a "functional" format to alleviate having lots of
keywords for simple operations: E.g. GLS's suggestion on page 137
of "Decisions on the First Draft Common Lisp Manual", which would
allow one to write 
  ((fposition #'equal x) s 0 7)  for  (position x s 0 7)
  ((fposition #'eq x) s 0 7)     for  (posq x s 0 7)

This format looks similar to something I've wanted for a long time
when macroexpanding, namely, for a form  
	foo = ((<something> . . .) a1 a2) 
then, provided that <something> isn't one of the special words for this 
context [like LAMBDA or (shudder!) LABEL] why not first expand 
(<something> . . .), yielding say <more>, and then try again on the form  
(<more> a1 a1).    Of course, (<something> . . .) may not indicate any 
macros, and <more> will just be eq to it.   The MacLISP function MACROEXPAND 
does do this, but EVAL doesn't call it in this circumstance (rather EVAL does 
a recursive sub-evaluation).

FIRST SUGGESTION:
     In the context of ((<something> . . .) a1 a2),  have EVAL macroexpand 
 the part (<something> . . .) before recursively evaluating it.

  This will have the incompatible effect that
    (defmacro foo () 'LIST)
    ((foo) 1 2)
  no longer causes an error (unbound variable for LIST), but will rather
  first expand into (list 1 2), which then evaluates to (1 2).
  Similarly, the sequence
    (defun foo () 'LIST)
    ((foo) 1 2)
  would now, incompatibly, result in an error.
  [Yes, I'd like to see COMMON LISP flush the aforesaid recursive evaluation, 
   but that's another kettle of worms we don't need to worry about now.]


SECOND SUGGESTION
    Let FMACRO have special significance for macroexpansion in the context
 ((FMACRO . <fun>) . . .), such that this form is a macro call which is
 expanded by calling <fun> on the whole form.


As a result of these two changes, many of the "functional programming
style" examples could easily be implemented by macros.  E.g.
  (defmacro FPOSITION (predfun arg)
    `(FMACRO . (LAMBDA (FORM) 
		 `(SI:POS-HACKER ,',arg 
				 ,@(cdr form) 
				 ':PREDICATE 
				 ,',predfun))))
where SI:POS-HACKER is a version of POSITION which accepts keyword arguments
to direct the actions, at the right end of the argument list.
Notice how 

    ((fposition #'equal x) a1 a2) 
==>
    ((fmacro . (lambda (form) 
		  `(SI:POS-HACKER X ,@(cdr form) ':PREDICATE #'EQUAL)))
	  a1
	  a2)
==>
    (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL)

If any macroexpansion "cache'ing" is going on, then the original form 
((fposition #'equal x) a1 a2)  will be paired with the final
result (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL) -- e.g., either
by DISPLACEing, or by hashtable'ing such as MACROMEMO in PDP10 MacLISP.

Now unfortunately, this suggestion doesn't completely subsume the 
functional programming style, for it doesn't directly help with the
case mentioned by GLS:
  ((fposition (fnot #'numberp)) s)  for (pos-if-not #'numberp s)
Nor does it provide an easy way to use MAPCAR etc, since
  (MAPCAR (fposition #'equal x) ...)
doesn't have (fposition #'equal x) in the proper context.
[Foo, why not use DOLIST or LOOP anyway?]   Nevertheless, I've had many 
occasions where I wanted such a facility, especially when worrying about 
speed of compiled code.  

Any comments?

∂27-Jan-82  2202	RPG  	MVLet    
To:   common-lisp at SU-AI  

My view of the multiple value issue is that returning multiple values is
more like a function call than like a function return.  One cannot use
multiple values except in those cases where they are caught and spread
into variables via a MVLet or whatever.  Thus, (f (g) (h)) will ignore all
but the first values of g and h in this context.  In both the function
call and multiple value return cases the procedure that is to receive
values does not know how many values to expect in some cases.  In
addition, I believe that it is important that a function, if it can return
more than one value, can return any number it likes, and that the
programmer should be able to capture all of them somehow, even if some
must end up in a list.  The Lisp Machine multiple value scheme cannot do
this.  If we buy that it is important to capture all the values somehow,
then one of two things must happen.  First, the syntax for MVLet has to
allow something like (mvlet (x y (:rest z)) ...)  or (mvlet (x y . z)
...), which is close to the LAMBDA (or at least DEFUN-LAMBDA) syntax,
which means that it is a cognitive confusion if these two binding
descriptions are not the same.  Or, second, we have to have a version
like (mvlet l ...) which binds l to the list of returned values etc. This
latter choice, I think, is a loser.

Therefore, my current stand is that we either 1, go for the decision we
made in Boston at the November meeting, 2, we allow only 2 values in the
mv case (this anticipates the plea that it is sure convenient to be able
to return a value and a flag...), or 3, we flush multiple values
altogether.  I find the Lisp Machine `solution' annoyingly contrary to
intuition (even more annoying than just allowing 2 values).
			-rpg-

∂28-Jan-82  0901	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
Date: Thursday, 28 January 1982, 11:37-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: MVLet    
To: RPG at SU-AI, common-lisp at SU-AI

(1) Would you please remind me what conclusion we came to at the
November meeting?  My memory is that the issue was left up in the air
and that there was no conclusion.

(2) I think that removing multiple values, or restricting the number,
would be a terrible restriction.  Multiple values are extremely useful;
their lack has been a traditional weakness in Lisp and I'd hate to see
that go on.

(3) In Zetalisp you can always capture all values by using
(multiple-value-list <form>).  Any scheme that has only multiple-value
and multiple-value-bind and not multiple-value-list is clearly a loser;
the Lisp-Machine-like alternative has got to be a proposal that has all
three Zetalisp forms (not necessarily under those names, of course).
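For instance, assuming the Zetalisp behavior where FLOOR returns the
quotient and the remainder as two values,

  (multiple-value-list (floor 9 4))    ; => (2 1)

is the catch-all: however many values come back, they all end up in a
list, which answers RPG's requirement that all the values be capturable
somehow.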

∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 04:24-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I agree with Fahlman about binding constructs.
I want LAMBDA to be the way it is, and LET to be the way it is,
and certainly not the same.

As for multiple values, if LET is fully extended to do what
SETF can do, then (LET (((VALUES A B C) m-v-returning-form)) ...)
can be used to replace M-V-BIND, just as (SETF (VALUES A B C) ...)
can replace MULTIPLE-VALUES.  I never use MULTIPLE-VALUES any more
because I think that the SETF style is clearer.

∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 06:02-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I would like to clear up a misunderstanding that seems to be
prevalent.  The MIT Lisp machine system, used by Symbolics and LMI, is
probably going to be converted to support Common Lisp (which is the
motivation for my participation in the design effort for Common Lisp
clean).  Whenever this happens, Common Lisp will be available on
the CADR machine (as found at MIT and as sold by LMI and Symbolics)
and the Symbolics L machine (after that exists), and on the second
generation LMI machine (after that exists).

I can't speak for LMI's opinion of Common Lisp, but if MIT converts,
LMI will certainly do so.  As the main Lisp machine hacker at MIT, I
can say that I like Common Lisp.

It is not certain when either of the two new machines will appear, or
when the Lisp machine system itself will support Common Lisp.  Since
these three events are nearly independent, they could happen in any
order.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:23-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

To clear up another random point: the name "Zetalisp" is not a Symbolics
proprietary name.  It is just a name that has been made up to replace
the ungainly name "Lisp Machine Lisp".  The reason for needing a name is
that I belive that people associate the Lisp Machine with Maclisp,
including all of the bad things that they have traditionally belived
about Maclisp, like that it has a user interface far inferior to that of
Interlisp.

I certainly hope that all of the Lisp Machines everywhere will convert
to Common Lisp together.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:20-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

If I understand what RPG is saying then I think that I am not convinced
by his point.  I don't think that just because multiple-value-bind takes
a list of variables that are being bound to variables means that it HAS
to have all the features that LAMBDA combinations have, in the name of
language simplicity, because I just don't think that the inconsistency
there bothers me very much.  It is a very localized inconsistency and I
really do not belive it is going to confuse people much.

However, I still object to RMS's proposal as am still opposed to having
"destructuring LET".  I have flamed about this enough in the past that I
will not do it now.  However, having a "destructuring-bind" (by some
name) form that is like LET except that it destructures might be a
reasonable solution to providing a way to allow multiple-value-bind work
without any perceived language inconsistency.

∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
Date: 24 January 1982 23:06-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  adding to kernel
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI

    From: Fahlman at CMU-20C
    It is hard to comment on whether NIL is closer to being released than
    other Common Lisp implementations, since you don't give us a time
    estimate for NIL.

Oh. I had announced a release date of JAN 30. But, with the air-conditioner's
down for greater than a week that's got to go to at lease FEB 10. But
FEB 10 is the first week of classes at MIT, so I'll have JM, GJS, and
others on my case to get other stuff working. Sigh.
By release I mean that it is in a useful state, i.e. people will be able
to run their lisp programs in it. We have two concrete tests though, 
[1] To bring up "LSB".
   [A] This gives us stuff like a full hair FORMAT.
   [B] Martin's parser.
[2] To run Macsyma on the BEGIN, SIN, MATRIX, ALGSYS, DEFINT, ODE2, and
    HAYAT demos. 

Imagine bringing yourself and a tape to a naked VMS site, and installing
Emacs, a modern lisp, and Macsyma, in that order. You can really
blow away the people who have heard about these things but never
had a chance to use them, especially on their very own machine.
It's the kind of feeling that makes the hacking worthwhile.

Anyway, when I brought Macsyma over to the Plasma Fusion
Center Alcator Vax, I was doing all the Taylor series, integrals, and
equation solving they threw at me.  Stuff like
INTEGRATE(SIN(X↑2)*EXP(X↑2)*X↑2,X); then DIFF it, then RATSIMP and TRIGREDUCE
to get back to the starting point.  (Try that on MC and see how many
files get loaded.)  (Sorry, gibberish to non-macsyma-hackers.)
=> So I can say that macsyma is released to MIT sites now. (MIT-LNS too). 
   People can use it and I'll field any bug reports. <=

Point of Confusion: Some people are confused as to what Common-Lisp is.
                    Even people at DEC.

-GJC

∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
Date: 25 Jan 1982 0125-EST
From: Fahlman at CMU-20C
Subject: Sequences
To: common-lisp at SU-AI


I have spent a couple of days mulling over RPG's suggestion for putting
the keywords into a list in functional position.  I thought maybe I
could get used to the unfamiliarity of the syntax and learn to like
this proposal.  Unfortunately, I can't.

I do like Guy's proposal for dropping START/END arguments and also
several of the suggestions that Moon made.  I am trying to merge all
of this into a revised proposal in the next day or two.  Watch this
space.

-- Scott
-------

∂24-Jan-82  2246	Kim.fateman at Berkeley 	NIL/Macsyma    
Date: 24 Jan 1982 22:40:50-PST
From: Kim.fateman at Berkeley
To: gjc@mit-mc
Subject: NIL/Macsyma 
Cc: common-lisp@SU-AI

Since it has been possible to run Macsyma on VMS sites (under Eunice or
its precursor) since April 1980 (when we dropped off a copy at LCS),
it is not clear to me what GJC's ballyhoo is about.  If the physics
sites are only now getting a partly working Macsyma for VMS, it only
brings to mind the question of whether LCS ever sent out copies of the VMS-
Macsyma we gave them to other MIT sites.

But getting Maclisp programs up under NIL should not be the benchmark,
nor is it clear what the relationship to common lisp is.
Having macsyma run under common lisp (whatever that will be)
would be very nice, of course,
whether having macsyma run under NIL is a step in that direction or
not.  It might also be nice to see, for example, one of the big interlisp
systems.

∂25-Jan-82  1436	Hanson at SRI-AI 	NIL and DEC VAX Common LISP
Date: 25 Jan 1982 1436-PST
From: Hanson at SRI-AI
Subject: NIL and DEC VAX Common LISP
To:   rpg at SU-AI
cc:   hanson

Greetings:
	I understand from ARPA that DEC VAX Common Lisp may become a
reality and that you are closely involved.  If that is true, we in the
SRI vision group would like to work closely with you in defining the
specifications so that the resulting language can actually be used for
vision computations with performance and convenience comparable to
Algol-based languages.
	If this is not true, perhaps you can refer me to the people
I should talk with to make sure the mistakes of FRANZLISP are not
repeated in COMMON LISP.
	Thanks,  Andy Hanson  859-4395

ps - Where can we get Common Lisp manuals?
-------

∂25-Jan-82  1558	DILL at CMU-20C 	eql => eq?   
Date: 25 Jan 1982 1857-EST
From: DILL at CMU-20C
Subject: eql => eq?
To: common-lisp at SU-AI

Proposal: rename the function "eq" in common lisp to something like
"si:internal-eq-predicate", and then rename "eql" to "eq".  This would
have several advantages.

 * Simplification by reducing the number of equality tests.

 * Simplification by reducing the number of different versions of
   various predicates that depend on the type of equality test you
   want.

 * Greater machine independence of lisp programs (whether eq and equal
   are the same function for various datatypes is heavily 
   implementation-dependent, while eql is defined to be relatively 
   machine-independent; furthermore, functions like memq in the current
   common lisp proposal make it easier to use eq comparisons than eql).

Possible disadvantages:

 * Do people LIKE having, say, numbers with identical values not be eq?
   If so, they won't like this.

 * Efficiency problems.

I don't believe the first complaint.  If there are no destructive
operations defined for an object, eq and equal ought to do the same
thing.
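
A hedged illustration of the point about non-destructive objects (the EQ
results shown are typical, not guaranteed; EQL is the defined, portable one):

  (eq 1.5 1.5)             ; may be T or NIL -- implementation-dependent
  (eql 1.5 1.5)            ; T   -- numbers of the same type compared by value
  (eq 'foo 'foo)           ; T   -- symbols are interned, so EQ is safe here
  (eq (list 1) (list 1))   ; NIL -- distinct conses; EQL agrees, since conses
                           ;        do have destructive operations defined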

The second complaint should not be significant in interpreted code,
since the overhead of doing a type dispatch will probably be insignificant
in comparison with, say, finding the right subr and calling it.

In compiled code, taking the time to declare variable types should allow
the compiler to open-code "eq" into address comparisons, if appropriate,
even in the absence of a hairy compiler.  A hairy compiler could do even
better.

Finally, in the case where someone wants efficiency at the price of
tastefulness and machine-independence, the less convenient
implementation-dependent eq could be used.
-------

∂25-Jan-82  1853	Fahlman at CMU-20C 	Re: eql => eq? 
Date: 25 Jan 1982 2151-EST
From: Fahlman at CMU-20C
Subject: Re: eql => eq?
To: DILL at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 25-Jan-82 1857-EST


I don't think it would be wise to replace EQ with EQL on a wholesale basis.
On microcoded machines, this can be made to win just fine and the added
tastefulness is worth it.  But Common Lisp has to run on vaxen and such as
well, and there the difference can be a factor of three.  In scattered
use, this would not be a problem, but EQ appears in many inner loops.
-- Scott
-------

∂28-Jan-82  0901	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
Date: Thursday, 28 January 1982, 11:37-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: MVLet    
To: RPG at SU-AI, common-lisp at SU-AI

(1) Would you please remind me what conclusion we came to at the
November meeting?  My memory is that the issue was left up in the air
and that there was no conclusion.

(2) I think that removing multiple values, or restricting the number,
would be a terrible restriction.  Multiple values are extremely useful;
their lack has been a traditional weakness in Lisp and I'd hate to see
that go on.

(3) In Zetalisp you can always capture all values by using
(multiple-value-list <form>).  Any scheme that has only multiple-value
and multiple-value-bind and not multiple-value-list is clearly a loser;
the Lisp-Machine-like alternative has got to be a proposal that has all
three Zetalisp forms (not necessarily under those names, of course).

∂28-Jan-82  1235	Fahlman at CMU-20C 	Re: MVLet      
Date: 28 Jan 1982 1522-EST
From: Fahlman at CMU-20C
Subject: Re: MVLet    
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 28-Jan-82 0102-EST


I agree with DLW that we must retain M-V-LIST.  I never meant to exclude
that.

As for RPG's latest blast, I agree with some of his arguments but not
with his conclusions.  First, I think that the way multiple values are
actually used, in the overwhelming majority of cases, is more like a
return than a function call.  You call INTERN or FLOOR or some
user-written function, and you know what values it is going to return,
what each value means, and which ones you want to use.  In the case of
FLOOR, you might want the quotient or the remainder or both.  The old,
simple, Lisp Machine forms give you a simple and convenient way to
handle this common case.  If a function returns two often-used values
plus some others that are arcane and hard to remember, you just catch
the two you want and let the others (however many there are) evaporate.
M-V-LIST is available to programs (tracers for example) that want to
intercept all the values, no matter what.
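
In code, the common case and the intercept-everything case look roughly like
this (Zetalisp-style names assumed):

  ;; FLOOR returns a quotient and a remainder; catch both:
  (multiple-value-bind (q r) (floor 7 2)
    (list q r))                         ; => (3 1)

  ;; Catch only the quotient; the remainder evaporates:
  (multiple-value-bind (q) (floor 7 2)
    q)                                  ; => 3

  ;; A tracer that wants every value, however many there are:
  (multiple-value-list (floor 7 2))     ; => (3 1)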

Having said that, I agree that there are also some cases where you want
the catching of values to be more like a function call than a return,
since it may be somewhat unpredictable what is going to be bubbling up
from below, and the lambda list with optionals and rests has evolved as
a good way to handle this.  I submit that the cause of uniformity is
best served by actually making these cases be function calls, rather
than faking it.  The proposed M-V-CALL mechanism does exactly this when
given one value-returning "argument".  The proposal to let M-V-CALL
take more than one "argument" form is dangerous, in my view -- it could
easily lead to impenetrable and unmaintainable code -- but if it makes
John McCarthy happy, I'm willing to leave it in, perhaps with a warning
to users not to go overboard with this.
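
A sketch of the M-V-CALL case with a single value-returning "argument"
(written out here as multiple-value-call; the exact name and syntax are
assumed from the proposal):

  (multiple-value-call
    #'(lambda (q &optional r &rest extras)   ; ordinary lambda-list machinery
        (list q r extras))
    (floor 7 2))                             ; => (3 1 NIL)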

So I think RPG has made a strong case for needing something like
M-V-CALL, and I propose that M-V-CALL itself is the best form for this.
I am much less convinced by his argument that the multiple value SETQing
and BINDing forms have to be beaten into this same shape or thrown out
altogether.  Simple forms for simple things!

And even if RPG's aesthetic judgement were to prevail, I would still
have the problem that, because they have the semantics of PROGNs and not
of function calls, the Lambda-list versions of these functions would be
extremely painful to implement.

As I see it, if RPG wants to have a Lambda-binding form for value
catching, M-V-CALL gives this to him in a way that is clean and easily
implementable.  If what he wants is NOT to have the simple Lisp Machine
forms included, and to force everything through Lambda-list forms in the
name of uniformity, then we have a real problem.

-- Scott
-------

∂28-Jan-82  1416	Richard M. Stallman <rms at MIT-AI> 	Macro expansion suggestions 
Date: 28 January 1982 17:13-EST
From: Richard M. Stallman <rms at MIT-AI>
Subject: Macro expansion suggestions
To: common-lisp at SU-AI

If (fposition #'equal x) is defined so that when in function position
it "expands" to a function, then (mapcar (fposition ...)) loses
as JONL says, but (mapcar #'(fposition ...)...) can perhaps be
made to win.  If (function (fposition...)) expands itself into
(function (lambda (arg arg...) ((fposition ...) arg arg...)))
it will do the right thing.  The only problem is to determine
how many args are needed, which could be a property of the symbol
fposition, or could appear somewhere in its definition.
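
Concretely, with an assumed arity of three, the expansion RMS describes would
be:

  #'(fposition #'equal x)
  ;; expands into
  (function (lambda (a1 a2 a3)
              ((fposition #'equal x) a1 a2 a3)))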

Alternatively, the definition of fposition could have two "operations"
defined: one to expand when given an ordinary form with (fposition ...)
as its function, and one to expand when given an expression to apply
(fposition ...) to.

∂28-Jan-82  1914	Howard I. Cannon <HIC at MIT-MC> 	Macro expansion suggestions    
Date: 28 January 1982 19:46-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  Macro expansion suggestions
To: common-lisp at SU-AI


I have sent the following to GLS as a proposal for Lambda Macros in
Common Lisp.  It is implemented on the Lisp Machine, is installed
in Symbolics system 202 (unreleased), and will probably be in MIT
system 79.

You could easily use them to implement a functional programming style,
and they of course work with #' as RMS suggests.

The text is in Bolio input format, sorry.

--------

.section Lambda macros

Lambda macros may appear in functions where LAMBDA would have previously
appeared.  When the compiler or interpreter detects a function whose CAR
is a lambda macro, it "expands" the macro in much the same way that
ordinary Lisp macros are expanded -- the lambda macro is called with the
function as its argument, and is expected to return another function as
its value.  Lambda macros may be accessed with the (ε3:lambda-macroε*
ε2nameε*) function specifier.

.defspec lambda-macro function-spec lambda-list &body body
Analogously to ε3macroε*, defines a lambda macro to be called
ε2function-specε*. ε2lambda-listε* should consist of one variable, which
will be the function that caused the lambda macro to be called.  The
lambda macro must return a function.  For example:

.lisp
(lambda-macro ilisp (x)
  `(lambda (&optional ,@(second x) &rest ignore) . ,(cddr x)))
.end←lisp

would define a lambda macro called ε3ilispε* which would cause the
function to accept arguments like a standard Interlisp function -- all
arguments are optional, and extra arguments are ignored.  A typical call
would be:

.lisp
(fun-with-functional-arg #'(ilisp (x y z) (list x y z)))
.end←lisp

Then, any calls to the functional argument that
ε3fun-with-functional-argε* executes will pass arguments as if the
number of arguments did not matter.
.end←defspec

.defspec deflambda-macro
ε3deflambda-macroε* is like ε3defmacroε*, but defines a lambda macro
instead of a normal macro.
.end←defspec

.defspec deflambda-macro-displace
ε3deflambda-macro-displaceε* is like ε3defmacro-displaceε*, but defines
a lambda macro instead of a normal macro.
.end←defspec

.defspec deffunction function-spec lambda-macro-name lambda-list &body body 
ε3deffunctionε* defines a function with an arbitrary lambda macro
instead of ε3lambdaε*.  It takes arguments like ε3defunε*, except that
the argument immediately following the function specifier is the name of
the lambda macro to be used.  ε3deffunctionε* expands the lambda macro
immediately, so the lambda macro must have been previously defined.

For example:

.lisp
(deffunction some-interlisp-like-function ilisp (x y z)
  (list x y z))
.end←lisp

would define a function called ε3some-interlisp-like-functionε*, that
would use the lambda macro called ε3ilispε*.  Thus, the function would
do no checking of the number of arguments.
.end←defspec

∂27-Jan-82  1633	Jonl at MIT-MC Two little suggestions for macroexpansion
Several times in the COMMON LISP discussions, individuals have
proffered a "functional" format to alleviate having lots of
keywords for simple operations: E.g. GLS's suggestion on page 137
of "Decisions on the First Draft Common Lisp Manual", which would
allow one to write 
  ((fposition #'equal x) s 0 7)  for  (position x s 0 7)
  ((fposition #'eq x) s 0 7)     for  (posq x s 0 7)

This format looks similar to something I've wanted for a long time
when macroexpanding, namely, for a form  
	foo = ((<something> . . .) a1 a2) 
then, provided that <something> isn't one of the special words for this 
context [like LAMBDA or (shudder!) LABEL] why not first expand 
(<something> . . .), yielding say <more>, and then try again on the form  
(<more> a1 a2).    Of course, (<something> . . .) may not indicate any 
macros, and <more> will just be eq to it.   The MacLISP function MACROEXPAND 
does do this, but EVAL doesn't call it in this circumstance (rather EVAL does 
a recursive sub-evaluation).

FIRST SUGGESTION:
     In the context of ((<something> . . .) a1 a2),  have EVAL macroexpand 
 the part (<something> . . .) before recursively evaluating it.

  This will have the incompatible effect that
    (defmacro foo () 'LIST)
    ((foo) 1 2)
  no longer causes an error (unbound variable for LIST), but will rather
  first expand into (list 1 2), which then evaluates to (1 2).
  Similarly, the sequence
    (defun foo () 'LIST)
    ((foo) 1 2)
  would now, incompatibly, result in an error.
  [Yes, I'd like to see COMMON LISP flush the aforesaid recursive evaluation, 
   but that's another kettle of worms we don't need to worry about now.]


SECOND SUGGESTION
    Let FMACRO have special significance for macroexpansion in the context
 ((FMACRO . <fun>) . . .), such that this form is a macro call which is
 expanded by calling <fun> on the whole form.


As a result of these two changes, many of the "functional programming
style" examples could easily be implemented by macros.  E.g.
  (defmacro FPOSITION (predfun arg)
    `(FMACRO . (LAMBDA (FORM) 
		 `(SI:POS-HACKER ,',arg 
				 ,@(cdr form) 
				 ':PREDICATE 
				 ,',predfun))))
where SI:POS-HACKER is a version of POSITION which accepts keyword arguments
to direct the actions, at the right end of the argument list.
Notice how 

    ((fposition #'equal x) a1 a2) 
==>
    ((fmacro . (lambda (form) 
		  `(SI:POS-HACKER X ,@(cdr form) ':PREDICATE #'EQUAL)))
	  a1
	  a2)
==>
    (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL)

If any macroexpansion "cache'ing" is going on, then the original form 
((fposition #'equal x) a1 a2)  will be paired with the final
result (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL) -- e.g., either
by DISPLACEing, or by hashtable'ing such as MACROMEMO in PDP10 MacLISP.

Now unfortunately, this suggestion doesn't completely subsume the 
functional programming style, for it doesn't directly help with the
case mentioned by GLS:
  ((fposition (fnot #'numberp)) s)  for (pos-if-not #'numberp s)
Nor does it provide an easy way to use MAPCAR etc, since
  (MAPCAR (fposition #'equal x) ...)
doesn't have (fposition #'equal x) in the proper context.
[Foo, why not use DOLIST or LOOP anyway?]   Nevertheless, I've had many
occasions where I wanted such a facility, especially when worrying about
the speed of compiled code.

Any comments?

∂28-Jan-82  1633	Fahlman at CMU-20C 	Re: Two little suggestions for macroexpansion
Date: 28 Jan 1982 1921-EST
From: Fahlman at CMU-20C
Subject: Re: Two little suggestions for macroexpansion
To: JONL at MIT-MC
cc: LISP-FORUM at MIT-MC
In-Reply-To: Your message of 27-Jan-82 1724-EST


JONL's suggestion looks pretty good to me.  Given this sort of facility,
it would be easier to experiment with functional styles of programming,
and nothing very important is lost in the way of useful error checking,
at least nothing that I can see.

"Experiment" is a key word in the above comment.  I would not oppose the
introduction of such a macro facility into Common Lisp, but I would be
very uncomfortable if a functional-programming style started to pervade
the base language -- I think we need to play with such things for a
couple of years before locking them in.

-- Scott
-------

∂29-Jan-82  0945	DILL at CMU-20C 	Re: eql => eq?    
Date: 29 Jan 1982 1221-EST
From: DILL at CMU-20C
Subject: Re: eql => eq?
To: HEDRICK at RUTGERS
cc: common-lisp at SU-AI
In-Reply-To: Your message of 27-Jan-82 1332-EST

If an object in a Common Lisp is defined to have a particular type of
semantics (basically, you would like it to be an "immediate" object if
you could only implement that efficiently), programmers should not have
to worry about whether it is actually implemented using pointers.  If
you think about your data structures in terms of pointers in the
implementation, I contend that you are thinking about them at the wrong
level (unless you have decided to sacrifice commonality in order to
wring nanoseconds out of your code).  The reason you have to think about
it at this level is that the Lisp dialect you use lets the
implementation shine through when it shouldn't.

With the current Common Lisp definition, users will have to go to extra
effort to write implementation-independent code. For example, if your
implementation makes all numbers (or characters or whatever) that are
EQUAL also EQ, you will have to stop and force yourself to use MEMBER or
MEM instead of MEMQ, because other implementations may use pointer
implementations of numbers (or worse, your program will work for some
numbers and not others, because you are in a maclisp compatibility mode
and numbers less than 519 are immediate but others aren't).  My belief
is that Common Lisp programs should end up being common, unless the user
has made a conscious decision to make his code implementation-dependent.
The only reason to decide against a feature that would promote this is
if it would result in serious performance losses.

Even if an implementation is running on a VAX, it is still possible to
declare data structures (with the proposed "THE" construct, perhaps) so
that the compiler can know to use the internal EQ when possible, or to use a
more specific predicate.  It is also not clear if compiled code for EQL
has to be expensive, depending on how hard it is to determine the type
of a datum -- it doesn't seem totally unreasonable that a single
instruction could determine whether to use the internal EQ (a single
instruction), or the hairier EQL code.
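
For instance, a declaration along these lines might let the compiler do
exactly that (the THE construct is the proposed one; the example is only a
sketch):

  (defun same-tag-p (a b)
    ;; With both operands declared to be symbols, EQL can compile
    ;; down to the internal EQ -- a single address comparison.
    (eql (the symbol a) (the symbol b)))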

In what way is this "turning Lisp into Ada"?
-------

∂29-Jan-82  1026	Guy.Steele at CMU-10A 	Okay, you hackers
Date: 29 January 1982 1315-EST (Friday)
From: Guy.Steele at CMU-10A
To: Fateman at UCB-C70, gjc at MIT-MC
Subject:  Okay, you hackers
CC: common-lisp at SU-AI
Message-Id: <29Jan82 131549 GS70@CMU-10A>

It would be of great interest to the entire LISP community, now that
MACSYMA is up and running on VAX on two different LISPs, to get some
comparative timings.  There are standard MACSYMA demo files, and MACSYMA
provides for automatic timing.  Could you both please run the set of demo
files GJC mentioned, namely BEGIN, SIN, MATRIX, ALGSYS, DEFINT, ODE2, and
HAYAT, and send the results to RPG@SAIL for analysis?  (You're welcome,
Dick!)
--Guy

∂29-Jan-82  1059	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
Date: 29 Jan 1982 1354-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: eql => eq?
To: DILL at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 29-Jan-82 1221-EST

I have gotten two rejoinders to my comments about the conceptual
usefulness of EQ, both of which explained to me that EQ is not useful
for numbers or any other objects which may be immediate in some
implementations and pointers in others.  I am well aware of that.
Clearly if I am interested either in comparing the values of two numbers
or in seeing whether two general objects will look the same when
printed, EQ is not the right thing to use.  But this has been true back
from the days of Lisp 1.5.  I claim however that there are many cases
where I know that what I am dealing with is in fact a pointer, and what
I want is something that simply checks to see whether two objects are
identical.  In this case, I claim that it is muddying the waters
conceptually to use a primitive that checks for certain kinds of objects
and does tests oriented towards seeing whether they look the same when
printed, act the same when multiplied, or something else.  Possibly it
would be sensible to have a primitive that works like EQ for pointers
and gives an error otherwise.  But if what you are trying to do is to
see whether two literal atoms or CONS cells are the same, I can't see
any advantage to something that works like EQ for pointers and does
something else otherwise.  I can even come up with cases where EQ makes
sense for real numbers.  I can well imagine a program where you have two
lists, one of which is a proper subset of the other.   Depending upon
how they were constructed, it might well be the case that if something
from the larger list is a member of the smaller list, it is a member
using EQ, even if the object involved is a real number. I trust that the
following code will always print T, even if X is a real number.
   (SETQ BIG-LIST (CONS X BIG-LIST))
   (SETQ SMALL-LIST (CONS X SMALL-LIST))
   (PRINT (EQ (CAR BIG-LIST) (CAR SMALL-LIST)))
-------

∂29-Jan-82  1146	Guy.Steele at CMU-10A 	MACSYMA timing   
Date: 29 January 1982 1442-EST (Friday)
From: Guy.Steele at CMU-10A
To: George J. Carrette <GJC at MIT-MC> 
Subject:  MACSYMA timing
CC: common-lisp at SU-AI
In-Reply-To:  George J. Carrette's message of 29 Jan 82 13:30-EST
Message-Id: <29Jan82 144201 GS70@CMU-10A>

Well, I understand your reluctance to release timings before the
implementation has been properly tuned; but on the other hand,
looking at the situation in an abstract sort of way, I don't understand
why someone willing to shoot off his mouth and take unsupported pot
shots in a given forum should be unwilling to provide in that same
forum some objective data that might help to douse the flames (and
this goes for people on both sides of the fence).  In short, I merely
meant to suggest a way to prove that the so-called ballyhoo was
worthwhile (not that this is the only way to prove it).
--Guy

∂29-Jan-82  1204	Guy.Steele at CMU-10A 	Re: eql => eq?   
Date: 29 January 1982 1452-EST (Friday)
From: Guy.Steele at CMU-10A
To: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject:  Re: eql => eq?
CC: common-lisp at SU-AI
In-Reply-To:  HEDRICK@RUTGERS's message of 29 Jan 82 13:54-EST
Message-Id: <29Jan82 145243 GS70@CMU-10A>

(DEFUN FOO (X)
  (SETQ BIG-LIST (CONS X BIG-LIST))
  (SETQ SMALL-LIST (CONS X SMALL-LIST))
  (PRINT (EQ (CAR BIG-LIST) (CAR SMALL-LIST))))

(DEFUN BAR (Z) (FOO (*$ Z 2.0)))

Compile this using the MacLISP compiler.  Then (BAR 3.0) reliably
prints NIL, not T.  The reason is that the compiled code for FOO
gets, as its argument X, a pdl number passed to it by BAR.  The code
for FOO happens to choose to make two distinct heap copies of X,
rather than one, and so the cars of the two lists will contain
distinct pointers.
--Guy

∂29-Jan-82  1225	George J. Carrette <GJC at MIT-MC> 	MACSYMA timing
Date: 29 January 1982 15:23-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  MACSYMA timing
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

All I said was that Macsyma was running, and I felt I had to
do that because many people thought that NIL was not a working
language. I get all sorts of heckling from certain people anyway,
so a few extra unsupported pot-shots aren't going to bother me.
Also, I have limited time now to complete a paper on the timing
figures that JM wants me to submit to the conference on lisp
and applicable languages, taking place at CMU right? So you
get the picture.

But, OK, I'll give two timing figures, VAX-780 speed in % of KL-10.

Compiling "M:MAXII;NPARSE >"   48% of KL-10.
INTEGRATE(1/(X↑3-1),X)         12% of KL-10.

Obviously the compiler is the most-used program in NIL, so it has been tuned.
Macsyma has not been tuned.

Note well, I say "Macsyma has not been tuned" not "NIL has not been tuned."
Why? Because NIL has been tuned, lots of design thought by many people,
and lots of work by RWK and RLB to provide fast lisp primitives in the VAX.
It is Macsyma which needs to be tuned for NIL. This may not be very
interesting! Purely source-level hacks. For example, the Franz people
maintain entirely separate versions of large (multi-page)
functions from the core of Macsyma for the purpose
of making Macsyma run fast in Franz.
=> There is nothing wrong with this when it is worth the time saved
   in solving the user's problems. I think for Macsyma it is worth it. <=

The LISPM didn't need special hacks though. This is interesting,
I think...

-gjc

∂29-Jan-82  1324	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
Date: 29 Jan 1982 1620-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re:  Re: eql => eq?
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI
In-Reply-To: Your message of 29-Jan-82 1452-EST

I call that a bug.
-------

∂29-Jan-82  1332	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
Date: 29 Jan 1982 1627-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re:  Re: eql => eq?
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI
In-Reply-To: Your message of 29-Jan-82 1452-EST

I seem to recall that it was a basic property of Lisp that
  (EQ X (CAR (CONS X Y)))
If your compiler compiles code that does not preserve this property,
the kindest thing I have to say is that it is premature optimization.
-------

∂29-Jan-82  1336	Guy.Steele at CMU-10A 	Re: Re: eql => eq?    
Date: 29 January 1982 1630-EST (Friday)
From: Guy.Steele at CMU-10A
To: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject:  Re: Re: eql => eq?
CC: common-lisp at SU-AI
In-Reply-To:  HEDRICK@RUTGERS's message of 29 Jan 82 16:20-EST
Message-Id: <29Jan82 163020 GS70@CMU-10A>

Well, it is at least a misfeature that SETQ and lambda-binding
do not preserve EQ-ness.  It is precisely for this reason that
the predicate EQL was proposed: this is the strongest equivalence
relation on S-expressions which is preserved by SETQ and binding.
Notice that this definition is in terms of user-level semantics
rather than implementation technique.
It certainly was a great feature that user semantics and implementation
coincided and had simple definitions in EQ in the original LISP.
MacLISP was nudged from this by the great efficiency gains to be had
for numerical code, and it didn't bother too many users.
The Swiss Cheese draft of the Common LISP manual does at least make
all this explicit: see the first page of the Numbers chapter.  The
disclaimer is poorly stated (my fault), but it is there for the nonce.
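
In other words, rewriting the earlier example with EQL makes it reliable
again, even when the compiler makes separate heap copies of a pdl number:

  (DEFUN FOO (X)
    (SETQ BIG-LIST (CONS X BIG-LIST))
    (SETQ SMALL-LIST (CONS X SMALL-LIST))
    (PRINT (EQL (CAR BIG-LIST) (CAR SMALL-LIST))))   ; always prints T

  (DEFUN BAR (Z) (FOO (*$ Z 2.0)))
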
--Guy

∂29-Jan-82  1654	Richard M. Stallman <RMS at MIT-AI> 	Trying to implement FPOSITION with LAMBDA-MACROs.    
Date: 29 January 1982 19:46-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Trying to implement FPOSITION with LAMBDA-MACROs.
To: HIC at MIT-AI, common-lisp at SU-AI

LAMBDA-MACRO is a good hack but is not exactly what JONL was suggesting.

The idea of FPOSITION is that ((FPOSITION X Y) MORE ARGS)
expands into (FPOSITION-INTERNAL X Y MORE ARGS), and
((FPOSITION) MORE ARGS) into (FPOSITION-INTERNAL NIL NIL MORE ARGS).
In JONL's suggestion, the expander for FPOSITION operates on the
entire form in which the call to the FPOSITION-list appears, not
just on the FPOSITION-list.  This allows FPOSITION to be handled
straightforwardly; but also causes trouble with (FUNCTION (FPOSITION
...)) where lambda-macros automatically work properly.

It is possible to define FPOSITION using lambda-macros by making
(FPOSITION X Y) expand into 
(LAMBDA (&REST ARGS) (FUNCALL* 'FPOSITION-INTERNAL X Y ARGS))
but this does make worse code when used in an internal lambda.
It would also be possible to use an analogous SUBST function
but first SUBST functions have to be made to work with &REST args.
I think I can do this, but are SUBST functions in Common Lisp?
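
A sketch of that lambda-macro definition (FPOSITION-INTERNAL and the exact
lambda-macro syntax are assumed from HIC's proposal; only the two-argument
case is shown):

  (lambda-macro fposition (form)
    ;; (fposition x y) in function position becomes a closure over x and y:
    `(lambda (&rest args)
       (funcall* 'fposition-internal ,(second form) ,(third form) args)))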

∂29-Jan-82  2149	Kim.fateman at Berkeley 	Okay, you hackers   
Date: 29 Jan 1982 20:31:23-PST
From: Kim.fateman at Berkeley
To: guy.steele@cmu-10a
Subject: Okay, you hackers
Cc: common-lisp@SU-AI

I think that when GJC says that NIL/Macsyma runs the "X" demo, it
is kind of like the dog that plays checkers.  It is
remarkable, not for how well it plays, but for the fact that it plays at all.

(And I believe it is creditable [if] NIL runs Macsyma at all... I
know how hard it is, so don't get me wrong..)
Anyway, the standard timings we have had in the past, updated somewhat:

MC-Macsyma, Vaxima and Lisp Machine timings for DEMO files
(fg genral, fg rats, gen demo, begin demo)
(garbage collection times excluded.)  An earlier version of this
table was prepared and distributed in April, 1980.  The only
column I have changed is the 2nd one.

MC Time	     VAXIMA    	128K lispm     192K lispm       256K lispm
4.119	   11.8   sec.  43.333 sec.     19.183 sec.    16.483 sec.  
2.639	    8.55  sec.  55.916 sec.     16.416 sec.    13.950 sec. 
3.141	   14.3   sec. 231.516 sec.     94.933 sec.    58.166 sec.  
4.251	   13.1   sec. 306.350 sec.    125.666 sec.    90.716 sec. 


(Berkeley VAX 11/780 UNIX (Kim) Jan 29, 1982,  KL-10 MIT-MC ITS April 9, 1980.)
Kim has no FPA, and 2.5meg of memory.  Actually, 2 of these times are
slower than in 1980, 2 are faster. 

Of course, GJC could run these at MIT on his Franz/Vaxima/Unix system, and
then bring up his NIL/VMS system and time them again.

∂29-Jan-82  2235	HIC at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs.  
Date: Friday, 29 January 1982  22:13-EST
From: HIC at SCRC-TENEX
To:   Richard M. Stallman <RMS at MIT-AI>
Cc:   common-lisp at SU-AI
Subject: Trying to implement FPOSITION with LAMBDA-MACROs.

    Date: Friday, 29 January 1982  19:46-EST
    From: Richard M. Stallman <RMS at MIT-AI>
    To:   HIC at MIT-AI, common-lisp at SU-AI
    Re:   Trying to implement FPOSITION with LAMBDA-MACROs.

    LAMBDA-MACRO is a good hack but is not exactly what JONL was suggesting.
Yes, I know.  I think it's the right thing, however.

    The idea of FPOSITION is that ((FPOSITION X Y) MORE ARGS)
    expands into (FPOSITION-INTERNAL X Y MORE ARGS), and
    ((FPOSITION) MORE ARGS) into (FPOSITION-INTERNAL NIL NIL MORE ARGS).
    In JONL's suggestion, the expander for FPOSITION operates on the
    entire form in which the call to the FPOSITION-list appears, not
    just to the FPOSITION-list.  This allows FPOSITION to be handled
    straightforwardly; but also causes trouble with (FUNCTION (FPOSITION
    ...)) where lambda-macros automatically work properly.
Yes, that's right.  If you don't care about #'(FPOSITION ..), then you can have
the lambda macro expand into a real macro which can see the form, so you
can use lambda macros to simulate JONL's behavior quite easily.

    It is possible to define FPOSITION using lambda-macros by making
    (FPOSITION X Y) expand into 
    (LAMBDA (&REST ARGS) (FUNCALL* 'FPOSITION-INTERNAL X Y ARGS))
    but this does make worse code when used in an internal lambda.
    It would also be possible to use an analogous SUBST function
    but first SUBST functions have to be made to work with &REST args.
    I think I can do this, but are SUBST functions in Common Lisp?
Yes, this is what I had in mind.  The fact that this makes worse code
when used as an internal lambda is a bug in the compiler, not an
intrinsic fact of Common-Lisp or of the Lisp Machine.  However, it would
be ok if subst's worked with &REST args too.

∂30-Jan-82  0006	MOON at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs 
Date: Saturday, 30 January 1982  03:00-EST
From: MOON at SCRC-TENEX
To:   Richard M. Stallman <RMS at MIT-AI>
Cc:   common-lisp at SU-AI
Subject: Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs

If SUBSTs aren't in Common Lisp, they certainly should be.  They are
extremely useful and trivial to implement.

∂30-Jan-82  0431	Kent M. Pitman <KMP at MIT-MC> 	Those two little suggestions for macroexpansion 
Date: 30 January 1982 07:26-EST
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Those two little suggestions for macroexpansion
To: Fahlman at CMU-20C
cc: LISP-FORUM at MIT-MC

    Date: 28 Jan 1982 1921-EST
    From: Fahlman at CMU-20C

    JONL's suggestion looks pretty good to me...
-----
Actually, JONL was just repeating suggestions brought up by GLS and EAK just
over a year ago on LISP-FORUM. I argued then that the recursive EVAL call was
semantically all wrong and not possible to support compatibly between the 
interpreter and compiler ... I won't bore you with a repeat of that discussion.
If you've forgotten it and are interested, it's most easily gettable from the
file "MC: AR1: LSPMAIL; FMACRO >".

∂30-Jan-82  1234	Eric Benson <BENSON at UTAH-20> 	Re: MVLet   
Date: 30 Jan 1982 1332-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: MVLet
To: Common-Lisp at SU-AI

Regarding return of multiple values: "...their lack has been a traditional
weakness in Lisp..."  What other languages have this feature?  Many have
call-by-reference which allows essentially the same functionality, but I
don't know of any which have multiple value returns in anything like the
Common Lisp sense.

I can certainly see the benefit of including them, but the restrictions
placed on them and the dismal syntax for using them counteract the
intention of their inclusion, namely to increase the clarity of those
functions that have more than one value of interest.  If we were using a
graphical dataflow language they would fit like a glove, without all the
fuss.  The problem arises because each arrangement of arcs passing values
requires either its own special construct or binding the values to
variables.  I'm not suggesting we should throw out the n-in, 1-out nature
of Lisp forms in favor of an n-in, m-out arrangement (at least not right
now!), but rather that the current discussion of multiple values is unlikely to
come to a satisfactory conclusion due to the "tacked-on afterthought"
nature of the current version.  We may feel that it is a useful enough
facility to keep in spite of all this, but it's probably too much to hope
to "do it right".
-------

∂30-Jan-82  1351	RPG  	MVlet    
To:   common-lisp at SU-AI  
Of course, if Scott is only worried about the difficulty of implementing
the full MVlet with hairy syntax, all one has to do is provide MV-LIST
as Dan notes and write MVlet as a simple macro using that and LAMBDA.
That way CONSes, but who said that it had to be implemented well?
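
A sketch of the macro RPG has in mind, assuming MV-LIST is the Zetalisp
multiple-value-list (it conses, as he says):

  (defmacro mvlet (varlist form &body body)
    ;; Bind the (consed) list of values of FORM to VARLIST via APPLY of a LAMBDA.
    `(apply #'(lambda ,varlist ,@body)
            (multiple-value-list ,form)))

  ;; (mvlet (q r) (floor 7 2) (list q r))  =>  (3 1)
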
				-rpg-

∂30-Jan-82  1405	Jon L White <JONL at MIT-MC> 	Comparison of "lambda-macros" and my "Two little suggestions ..."
Date: 30 January 1982 16:55-EST
From: Jon L White <JONL at MIT-MC>
Subject: Comparison of "lambda-macros" and my "Two little suggestions ..."
To: KMP at MIT-MC, hic at SCRC-TENEX
cc: LISP-FORUM at MIT-MC, common-lisp at SU-AI

[Apologies for double mailings -- could we agree on a name for a
 mailing list to be kept at SU-AI which would be just those
 individuals in COMMON-LISP@SU-AI who are not also on LISP-FORUM@MC?]

There were two suggestions in my note, and lambda-macros relate
to only one of them, namely the first one:

    FIRST SUGGESTION:
	 In the context of ((<something> . . .) a1 a2),  have EVAL macroexpand 
     the part (<something> . . .) and "try again" before recursively 
     evaluating it. This will have the incompatible effect that
	(defmacro foo () 'LIST)
	((foo) 1 2)
     no longer causes an error (unbound variable for LIST), but will rather
     first expand into (list 1 2), which then evaluates to (1 2).

Note that for clarity, I've added the phrase "try again", meaning to
look at the form and see if it is recognized explicitly as, say, some
special form, or some subr application.

The discussion from last year, which resulted in the name "lambda-macros",
centered around finding a separate (but equal?) mechanism for code-expansion
for non-atomic forms which appear in a function place;  my first suggestion 
is to change EVAL (and the compiler if necessary) to call the regular macroexpander
on any form which looks like some kind of function composition, and thus
implement a notion of "Meta-Composition" which is context free.  It would be 
a logical consequence of this notion that eval'ing (FUNCTION (FROTZ 1)) must
first macroexpand (FROTZ 1), so that #'(FPOSITION ...) could work in the 
contexts cited about MAP.  However, it is my second suggestion that would
not work in the context of an APPLY -- it is intended only for the EVAL-
of-a-form context -- and I'm not sure if that has been fully appreciated
since only RMS appears to have alluded to it.

However, I'd like to offer some commentary on why context-free 
"meta-composition" is good for eval, yet why context-free "evaluation" 
is bad:
  1) Context-free "evaluation" is SCHEME.  SCHEME is not bad, but it is
     not LISP either.  For the present, I believe the LISP community wants
     to be able to write functions like:
	(DEFUN SEMI-SORT (LIST)
	  (IF (GREATERP (FIRST LIST) (SECOND LIST))
	      LIST 
	      (LIST (SECOND LIST) (FIRST LIST))))
     Correct interpretation of the last line means doing (FSYMEVAL 'LIST)
     for the instance of LIST in the "function" position, but doing (more
     or less) (SYMEVAL 'LIST) for the others -- i.e., EVAL acts differently
     depending upon whether the context is "function" or "expression-value".
 2) Context-free "Meta-composition" is just source-code re-writing, and
    there is no ambiguity of reference such as occured with "LIST" in the 
    above example.  Take this example:
	(DEFMACRO GET-SI (STRING)
	  (SETQ STRING (TO-STRING STRING))
	  (INTERN STRING 'SI))
        (DEFUN SEE-IF-NEW-ATOM-LIST (LIST)
	  ((GET-SI "LIST")  LIST  (GET-SI "LIST")))
    Note that the context for (GET-SI "LIST") doesn't matter (sure, there
    are other ways to write equivalent code but . . .)
    Even the following macro definition for GET-SI results in perfectly
    good, unambiguous results:
	(DEFMACRO GET-SI (STRING)
	  `(LAMBDA (X Y) (,(intern (to-string string) 'SI) X Y)))
    For example, assuming that (LAMBDA ...) => #'(LAMBDA ...),
      (SEE-IF-NEW-ATOM-LIST 35)   =>   (35  #'(LAMBDA (X Y) (LIST X Y)))

The latter (bletcherous) example shows a case where a user ** perhaps **
did not intend to use (GET-SI...) anywhere but in function context --
he simply put in some buggy code.   The lambda-macro mechanism would require
a user to state unequivocally that a macro definition is for use in precisely
one context;  I'd rather not be encumbered with separate-but-parallel machinery
and documentation -- why not have this sort of restriction on macro usage
contexts be some kind of optional declaration?

Yet my second suggestion involves a form which could not at all be interpreted
in "expression-value" context:
    SECOND SUGGESTION
	Let FMACRO have special significance for macroexpansion in the context
     ((FMACRO . <fun>) . . .), such that this form is a macro call which is
     expanded by calling <fun> on the whole form.
Thus (LIST 3 (FMACRO . <fun>)) would cause an error.  I believe this 
restriction is more akin to that which prevents MACROs from working
with APPLY.

∂30-Jan-82  1446	Jon L White <JONL at MIT-MC> 	The format ((MACRO . f) ...)  
Date: 30 January 1982 17:39-EST
From: Jon L White <JONL at MIT-MC>
Subject: The format ((MACRO . f) ...)
To: common-lisp at SU-AI
cc: LISP-FORUM at MIT-MC


HIC has pointed out that the LISPM interpreter already treats the
format ((MACRO . f) ...) according to my "second suggestion" for
((FMACRO . f) ..);  although I couldn't find this noted in the current
manual, it does work.   I'd be just as happy with ((MACRO . f) ...)  -- my 
only consideration was to avoid a perhaps already used format.  Although the 
LISPM compiler currently barfs on this format, I believe there will be a 
change soon?

The issue of parallel macro formats -- lambda-macros versus
only context-free macros -- is quite independent; although I
have a preference, I'd be happy with either one.

∂30-Jan-82  1742	Fahlman at CMU-20C 	Re: MVlet      
Date: 30 Jan 1982 2039-EST
From: Fahlman at CMU-20C
Subject: Re: MVlet    
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 30-Jan-82 1651-EST


But why choose a form that is hard to implement well and that will
therefore be implemented poorly over one that is easy to implement well?
If we are going to CONS, we may as well throw the MV stuff out
altogether.  Even if implementation were not a problem, I would prefer
the simple syntax.  Does anyone else out there share RPG's view that
the alleged uniformity of the hairy syntax justifies the hair?
-- Scott
-------

∂30-Jan-82  1807	RPG  	MVlet    
To:   common-lisp at SU-AI  
1. What is it that is hard to implement about the MVlet thing that is not
already swamped by the difficulty of having n values on the stack
as you return and throw, and is also largely subsumed by the theory
of function entry?

2. To get any variable number of values back now you have to CONS anyway,
so implementing it `poorly' for the user, but with 
a uniform syntax for all, is better than the user implementing
it poorly himself over and over.

3. If efficiency of the implementation is the issue, and if the
simple cases admit efficiency in the old syntax, the same simple 
cases admit efficiency in the proposed syntax.

4. Here's what happens when a function is called:
	You have a description of the variables and how the
	values that you get will be bound to them depending on how many you get.

  Here's what happens when a function with multiple values returns to
a MVlet:
	You have a description of the variables and how the
	values that you get will be bound to them depending on how many you get.

Because the naive user will think these descriptions are similar, he will expect
that the syntax to deal with them is similar.

∂30-Jan-82  1935	Guy.Steele at CMU-10A 	Forwarded message
Date: 30 January 1982 2231-EST (Saturday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Forwarded message
CC: feinberg at CMU-20C
Message-Id: <30Jan82 223157 GS70@CMU-10A>


- - - - Begin forwarded message - - - -
Date: 30 January 1982  21:43-EST (Saturday)
From: FEINBERG at CMU-20C
To:   Guy.Steele at CMUA
Subject: Giving in to Maclisp
Via:     CMU-20C; 30 Jan 1982 2149-EST

Howdy!
	I was looking through Decisions.Press and I came upon a 
little section, which I was surprised to see:


        Adopt functions parallel to GETF, PUTF, and REMF, to be
        called GETPR, PUTPR, and REMPR, which operate on symbols.
        These are analogous to GET, PUTPROP, and REMPROP of
        MACLISP, but the arguments to PUTPR are in corrected order.
        (It was agreed that GETPROP, PUTPROP, and REMPROP would be
        better names, but that these should not be used to minimize
        compatibility problems.)

Are we really going to give all the good names away to Maclisp in the
name of "compatibility"?  Compatibility in what way? Is it not clear
that we will have to do extensive modifications to Maclisp to get
Common Lisp running in it anyway? Is it also not clear that Maclisp
programs will also require extensive transformation to run in Common
Lisp? Didn't everyone agree that coming up with a clean language,
even at the expense of compatibility, was most important? I think it
is crucial that we break away from Maclisp braindamage, and not let
it steal good names in the process.  PUTPR is pretty meaningless,
whereas PUTPROP is far more clear.  

						--Chiron
- - - - End forwarded message - - - -

∂30-Jan-82  1952	Fahlman at CMU-20C 	Re: MVlet      
Date: 30 Jan 1982 2244-EST
From: Fahlman at CMU-20C
Subject: Re: MVlet    
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 30-Jan-82 2107-EST


    1. What is it that is hard to implement about the MVlet thing that is not
    already swamped by the difficulty of having n values on the stack
    as you return and throw, and is also largely subsumed by the theory
    of function entry?

Function calling with hairy lambda syntax was an incredible pain to
implement decently, but was worth it.  Having multiple values on the
stack was also a pain to implement, but was also (just barely) worth it.
The proposed M-V-CALL just splices together these two moby pieces of
machinery, so is relatively painless.  In the implementations I am
doing, at least, the proposed lambda-list syntax for the other MV forms
will require a third moby chunk of machinery since it has to do what a
function call does, but it cannot be implemented as a function call
since it differs slightly.

    2. To get any variable number of values back now you have to CONS anyway,
    so implementing it `poorly' for the user, but with 
    a uniform syntax for all, is better than the user implementing
    it poorly himself over and over.

Neither the simple MV forms nor M-V-CALL would cons in my
implementations, except in the case that the functional arg to M-V-CALL
takes a rest arg and there is at least one rest value passed to it.  To
go through M-V-LIST routinely would cons much more, and would make the
multiple value mechanism totally worthless.

    3. If efficiency of the implementation is the issue, and if the
    simple cases admit efficiency in the old syntax, the same simple 
    cases admit efficiency in the proposed syntax.

Yup, it can be implemented efficiently.  My objection is that it's a lot
of extra work (I figure it would take me a full week) and would make the
language uglier as well (in the eye of this beholder).

    4. Here's what happens when a function is called:
    	You have a description of the variables and how the
    	values that you get will be bound to them depending on how many
        you get.

      Here's what happens when a function with multiple values returns to
      a MVlet:
    	You have a description of the variables and how the
    	values that you get will be bound to them depending
        on how many you get.

Here's what really happens:

You know exactly how many values the called form is going to return and
what each value is.  Some of these you want, some you don't.  You
arrange to catch and bind those that are of interest, ignoring the rest.
Defaults and rest args simply aren't meaningful if you know how many
values are coming back.

In the rare case of a called form that is returning an unpredictable
number of args (the case that RPG erroneously takes as typical), you use
M-V-CALL and get the full lambda binding machinery, or you use M-V-LIST
and grovel the args yourself, or you let the called form return a list
in the first place.  I would guess that such unpredictable cases occur
in less than 1% of all multiple-value calls, and the above-listed
mechanisms handle that 1% quite well.

OK, we need to settle this.  If most of the rest of you share RPG's
taste in this, I will shut up and do the extra work to implement the
lambda forms, rather than walk out.  If RPG is alone or nearly alone in
his view of what is tasteful, I would hope that he would give in
gracefully.  I assume that punting multiples altogether or limiting them
to two values would please no one.

-- Scott
-------

∂30-Jan-82  2002	Fahlman at CMU-20C 	GETPR
Date: 30 Jan 1982 2256-EST
From: Fahlman at CMU-20C
Subject: GETPR
To: feinberg at CMU-20C
cc: common-lisp at SU-AI


I think that Feinberg underestimates the value of retaining Maclisp
compatibility in commonly-used functions, other things being equal.

On the other hand, I agree that GETPR and friends are pretty ugly.  If I
understand the proposal, GETPR is identical to the present GET, and
REMPR is identical to REMPROP.  Only PUTPR is different.  How about
going with GET, REMPROP, and PUT in new code, where PUT is like PUTPROP,
but with the new argument order?  Then PUTPROP could be phased out
gradually, with a minimum of hassle.  (Instead of PUT we could use
SETPROP, but I like PUT better.)
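
For concreteness, here is how calls would read under that scheme (the
argument order of PUT is assumed to parallel GET; the Maclisp PUTPROP order
is shown only for contrast):

  (get 'foo 'color)           ; read the COLOR property of FOO
  (put 'foo 'color 'red)      ; proposed PUT: symbol, indicator, value
  (remprop 'foo 'color)       ; unchanged
  ;; compare Maclisp:  (putprop 'foo 'red 'color)   -- value before indicator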

-- Scott
-------

∂30-Jan-82  2201	Richard M. Stallman <RMS at MIT-AI>
Date: 31 January 1982 00:57-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I vote for GET and PUT rather than GETPR and PUTPR.

Fahlman is not alone in thinking that it is cleaner not to
have M-V forms that contain &-keywords.

∂31-Jan-82  1116	Daniel L. Weinreb <dlw at MIT-AI> 	GETPR
Date: Sunday, 31 January 1982, 14:15-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: GETPR
To: Fahlman at CMU-20C, feinberg at CMU-20C
Cc: common-lisp at SU-AI

Would you please go back and read the message I sent a little while ago?
I believe that it makes more sense to FIRST define a policy about Maclisp
compatibility and THEN make the specific decisions based on that
proposal.  I don't want to waste time thinking about the GET thing before
we have such a policy.

∂01-Feb-82  0752	Jon L White <JONL at MIT-MC> 	Incredible co-incidence about the format ((MACRO . f) ...)  
Date: 1 February 1982 10:47-EST
From: Jon L White <JONL at MIT-MC>
Subject: Incredible co-incidence about the format ((MACRO . f) ...)
To: common-lisp at SU-AI
cc: LISP-FORUM at MIT-MC


One of my previous messages seemed to imply that ((MACRO . f) ...)
on the LISPM fulfills the intent of my second suggestion -- apparently
there is a completely unforeseen consequence of the fact that
   (FSYMEVAL 'FOO) => (MACRO . <foofun>)
when FOO is defined as a macro, such that the interpreter "makes it work".
However, MACROEXPAND knows nothing about this format, which is probably
why the compiler can't handle it; also such action isn't documented
anywhere.
 
Thus I believe it to be merely an accidental co-incidence that the
interpreter does anything at all meaningful with this format.   My
"second suggestion" now is to institutionalize this "accident"; it
certainly would make it easier to experiment with a pseudo-functional
programming style, and it obviously hasn't been used for any other
meaning.

∂01-Feb-82  0939	HIC at SCRC-TENEX 	Incredible co-incidence about the format ((MACRO . f) ...)   
Date: Monday, 1 February 1982  11:38-EST
From: HIC at SCRC-TENEX
To:   Jon L White <JONL at MIT-MC>
Cc:   common-lisp at SU-AI, LISP-FORUM at MIT-MC
Subject: Incredible co-incidence about the format ((MACRO . f) ...)

    Date: Monday, 1 February 1982  10:47-EST
    From: Jon L White <JONL at MIT-MC>
    To:   common-lisp at SU-AI
    cc:   LISP-FORUM at MIT-MC
    Re:   Incredible co-incidence about the format ((MACRO . f) ...)

    One of my previous messages seemed to imply that ((MACRO . f) ...)
    on the LISPM fulfills the intent of my second suggestion -- apparently
    there is a completely unforeseen consequence of the fact that
       (FSYMEVAL 'FOO) => (MACRO . <foofun>)
    when FOO is defined as a macro, such that the interpreter "makes it work".
    However, MACROEXPAND knows nothing about this format, which is probably
    why the compiler can't handle it; also such action isn't documented
    anywhere.

Of course MACROEXPAND knows about it (but not the version you looked
at).  I discovered this BUG (yes, BUG, I admit it, the LISPM had a
bug) in about 2 minutes of testing this feature, after I told the
world I thought it would work, and fixed it in about another two
minutes.
     
    Thus I believe it to be merely an accidental co-incidence that the
    interpreter does anything at all meaningful with this format.   My
    "second suggestion" now is to institutionalize this "accident"; it
    certainly would make it easier to experiment with a pseudo-functional
    programming style, and it obviously hasn't been used for any other
    meaning.

JONL, you seem very eager to make this be your proposal -- so be it.
I don't care.  However, it works on the Lisp Machine (it was a BUG
when it didn't work) to have (MACRO . foo) in the CAR of a form, and
thus it works to have a lambda macro expand into this.

Of course, Lambda Macros are the right way to experiment with the
functional programming style -- I think it's wrong to rely on seeing
the whole form (I almost KNOW it's wrong...).  In any case, the Lisp
Machine now has these.

∂01-Feb-82  1014	Kim.fateman at Berkeley 	GETPR and compatibility  
Date: 1 Feb 1982 10:11:13-PST
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: GETPR and compatibility

There are (at least) two kinds of compatibility worth comparing.

1. One, which I believe is very hard to do,
probably not worthwhile, and probably not
in the line of CL, is the kind which
would allow one to take an arbitrary maclisp (say) file, read it into
a CL implementation, and run it, without ever even telling the CL
system, hey, this file is maclisp.  And when you prettyprint or debug one of
those functions, it looks pretty much like what you read in, and did
not suffer "macro←replacement←itis".

2. The second type is to put in the file, or establish somehow,
#.(enter maclisp←mode)  ;; or whatever, followed by 
<random maclisp stuff>
#.(enter common←lisp←mode)  ;; etc.

The reader/evaluator would know about maclisp. There
are (at least) two ways of handling this 
  a:  any maclisp construct (e.g. get) would be macro-replaced by
the corresponding CL thing (e.g. getprop or whatever); arguments would
be reordered as necessary.  I think transor does this, though generally
in the direction non-interlisp ==> interlisp.  The original maclisp
would be hard to examine from within CL, since it was destroyed on read-in
(by read, eval or whatever made the changes). (Examination by looking
at the file or some verbatim copy would be possible).  This makes
debugging in native maclisp hard.
  b: wrap around each uniquely maclisp construction (perhaps invisibly) 
(evaluate←as←maclisp  <whatever>).  This would preserve prettyprinting,
and other things.  Functions which behave identically would presumably
not need such a wrapper, though interactions would be hard to manage.

I think 2a is what makes most sense, and is how Franz lisp 
handles some things which are, for example, in interlisp, but not in Franz.
The presumption is that you would take an interlisp (or maclisp)
file and translate it into CL, and at that point abandon the original
dialect.  In view of this, re-using the names seems quite possible,
once the conversion is done.
  In point of fact, what some people may do is handle CL this way.
That is, translate it into  another dialect, which, for whatever
reason, seems more appropriate.  Thus, an Xlisp chauvinist
might simply write an Xlispifier for CL. The Xlispifier for CL
would be written in Xlisp, and consist of the translation package
and (probably) a support package of CL functions.  Depending on
whether you are in CL-reading-mode or XL-reading-mode, you would
get one or the other "getprop".
  Are such "implementations of CL"  "correct"?  Come to think of
it, how would one determine if one is looking at an implementation
of CL?

∂01-Feb-82  1034	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	a proposal about compatibility 
Date:  1 Feb 1982 1326-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: a proposal about compatibility
To: common-lisp at SU-AI

I would like to propose that CL be a dialect of Lisp.  A reasonable
definition of Lisp seems to be the following:
  - all functions defined in the "Lisp 1.5 Programmer's Manual",
	McCarthy et al., 1962, other than those that are system- or
	implementation-dependent 
  - all functions on whose definitions Maclisp and Interlisp agree
I propose that CL should not redefine any names from these two sets,
except in ways that are upwards-compatible.
-------

∂01-Feb-82  1039	Daniel L. Weinreb <DLW at MIT-AI> 	Re: MVLet      
Date: 1 February 1982 13:32-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: Re: MVLet    
To: common-lisp at SU-AI

    Regarding return of multiple values: "...their lack has been a traditional
    weakness in Lisp..."  What other languages have this feature?  Many have
    call-by-reference which allows essentially the same functionality, but I
    don't know of any which have multiple value returns in anything like the
    Common Lisp sense.
Many of them have call-by-reference, which allows essentially the same
functionality.  Indeed, few of them have multiple value returns in the
Lisp sense, although the general idea is around, and was included in at
least some of the proposals for "DOD-1" (it's sometimes called "val out"
parameters).  Lisp is neither call-by-value nor call-by-reference exactly,
so a direct comparison is difficult.  My point was that there is a
pretty good way to return many things in the call-by-reference paradigm,
that it is used to good advantage by Pascal and PL/1 programs, and that Lisp
programmers who want to do analogous things have traditionally been up
the creek.

    We may feel that it is a useful enough facility to keep in spite of all
    this, but it's probably too much to hope to "do it right".
When we added multiple values to the Lisp Machine years ago, we decided that
we couldn't "do it right", but it was a useful enough facility to keep in
spite of all this.  I still think so, and it applies to Common Lisp for the
same reasons.

∂01-Feb-82  2315	Earl A. Killian <EAK at MIT-MC> 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs   
Date: 1 February 1982 19:09-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
To: MOON at SCRC-TENEX
cc: common-lisp at SU-AI

I don't want SUBSTs in Common Lisp, I want the real thing, i.e.
inline functions.  They can be implemented easily in any
implementation by replacing the function name with its lambda
expression (this isn't quite true, because of free variables, but
that's not really that hard to deal with in a compiler).  Now the
issue is simply efficiency.  Since Common Lisp has routinely
chosen cleanliness when efficiency can be dealt with by the
compiler (as it is in the S-1 compiler), then I see no reason to
have ugly SUBSTs.

∂01-Feb-82  2315	FEINBERG at CMU-20C 	Compatibility With Maclisp   
Date: 1 February 1982  16:35-EST (Monday)
From: FEINBERG at CMU-20C
To:   Daniel L. Weinreb <dlw at MIT-AI>
Cc:   common-lisp at SU-AI, Fahlman at CMU-20C
Subject: Compatibility With Maclisp

Howdy!
	I agree with you, we must have a consistent policy concerning
maintaining compatibility with Maclisp.  I propose that Common Lisp
learn from the mistakes of Maclisp, not repeat them.  This policy
means that Common Lisp is free to use clear and meaningful names for
its functions, even if they conflict with Maclisp function names.
Yes, some names must be kept for historical purposes (CAR, CDR and
CONS to name a few), but my view of Common Lisp is that it is in fact
a new language, and should not be constrained to live in the #+MACLISP
world.  I think that if Common Lisp software becomes useful enough, PDP-10
people will either make a Common Lisp implementation, write a
mechanical translator, or retrofit Maclisp to run Common
Lisp.  Common Lisp should either be upward compatible with Maclisp or
compatibility should take a back seat to a good language.  I think
Common Lisp has justifiably moved far enough away from Maclisp that
the former can no longer be accomplished, so the latter is the only
reasonable choice.  Being half upward compatible only creates more
confusion.

∂01-Feb-82  2319	Earl A. Killian <EAK at MIT-MC> 	GET/PUT names    
Date: 1 February 1982 19:32-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  GET/PUT names
To: common-lisp at SU-AI

I don't like the name GET for property lists.  GET is a verb, and
therefore doesn't sound very applicative to me.  I prefer Lisp
function names to refer to what they do, not how they do it.
Thus I'd like something like PROPERTY-VALUE, PROPERTY, or just
PROP (depending on how important a short name is) instead of GET.
PUTPROP would be SET-PROPERTY-VALUE, SET-PROPERTY, or SET-PROP,
though I'd personally use SETF instead:
	(SETF (PROP S 'X) Y)
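
For concreteness, a minimal sketch, taking PROP and SET-PROP as the
hypothetical names above and layering them on the existing GET/PUTPROP
(this is only an illustration, not a proposed definition):

	;; PROP as a rename of GET ...
	(DEFUN PROP (SYMBOL INDICATOR)
	  (GET SYMBOL INDICATOR))

	;; ... and SET-PROP as the updater that (SETF (PROP S 'X) Y)
	;; could expand into.
	(DEFUN SET-PROP (SYMBOL INDICATOR VALUE)
	  (PUTPROP SYMBOL VALUE INDICATOR)	; Maclisp argument order
	  VALUE)				; return the value stored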

∂01-Feb-82  2319	Howard I. Cannon <HIC at MIT-MC> 	The right way   
Date: 1 February 1982 20:13-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  The right way
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

    Date: 1 February 1982 1650-EST (Monday)
    From: Guy.Steele at CMU-10A
    To:   HIC at MIT-AI
    cc:   common-lisp at SU-AI
    Re:   The right way

    I think I take slight exception at the remark

        Of course, Lambda Macros are the right way to experiment with the
        functional programming style...

    It may be a right way, but surely not the only one.  It seems to me
    that actually using functions (rather than macros) also leads to a
    functional programming style.  Lambda macros may be faster in some
    implementations for some purposes.  However, they do not fulfill all
    purposes (as has already been noted: (MAPCAR (FPOSITION ...) ...)).

Sigh...it's so easy to be misinterpreted in mail.  Of course, that meant
"Of these two approaches,..."  I'm sorry I wasn't explicit enough.

However, now it's my turn to take "slight exception" (which wasn't so
slight on your part that you didn't bother to send a note):

Have we accepted the Scheme approach of LAMBDA as a "self-evaling" form?
If not, then I don't see why you expect (MAPCAR (FPOSITION ...) ...)
to work where (MAPCAR (LAMBDA ...) ...) wouldn't.  Actually, that's
part of the point of Lambda macros -- they work nicely when flagged
by #'.  If you want functions called, then have the lambda macro
turn into a function call.  I think writing #' is a useful marker and
serves to avoid other crocks in the implementation (e.g. evaling the
car of a form, and using the result as the function.  I thought we
had basically punted that idea a while ago.)

If, however, we do accept (LAMBDA ...) as a valid form that self-evaluates 
(or whatever), then I might propose changing lambda macros to be called
in normal functional position, or just go to the scheme of not distinguishing
between lambda and regular macros.

∂01-Feb-82  2321	Jon L White <JONL at MIT-MC> 	MacLISP name compatibility, and return values of update functions
Date: 1 February 1982 16:26-EST
From: Jon L White <JONL at MIT-MC>
Subject: MacLISP name compatibility, and return values of update functions
To: common-lisp at SU-AI

	
[I meant to CC this to common-lisp earlier -- was just sent to Weinreb.]

    Date: Sunday, 31 January 1982, 14:15-EST
    From: Daniel L. Weinreb <dlw at MIT-AI>
    To: Fahlman at CMU-20C, feinberg at CMU-20C
    Would you please go back and read the message I sent a little while ago?
    I believe that it makes more sense to FIRST define a policy about Maclisp
    compatibility and THEN make the specific decisions based on that
    proposal. . . 
Uh, what msg -- I've looked through my mail file for a modest distance, and
don't seem to find anything in your msgs to common-lisp that this might refer 
to.  I thought we had the general notion of not usurping MacLISP names, unless
EXTREMELY good cause could be shown.  For example,
 1) (good cause) The names for type-specific (and "modular") arithmetic 
    were usurped by LISPM/SPICE-LISP for the generic arithmetic  (i.e., 
    "+" instead of "PLUS" for generic, and nothing for modular-fixnum). 
    Although I don't like this incompatibility, I can see the point about 
    using the obvious name for the case that will appear literally tens of
    thousands of times in our code.
 2) (bad cause) LISPM "PRINT" returns a gratuitously-incompatible value.
    There is discussion on this point, with my observation that when it was
    first implemented very few LISPM people were aware of the 1975 change
    to MacLISP (in fact, probably only Ira Goldstein noticed it at all!)
    Yet no one has offered any estimate of the magnitude of the effects of 
    leaving undefined the value of side-effecting and/or updating functions;  
    presumably SETQ would have a defined value, and RPLACA/D also for 
    backwards compatibility, but what about SETF?
Actually the SETF question introduces the ambiguity of which of the
two possible values to return.  Take for example VSET:  Should (VSET V I X) 
return V, by analogy with RPLACA, or should it return X by analogy with SETQ? 
Whatever is decided for update functions in general affects SETF in some 
possibly conflicting way.  For this reason alone, RMS's suggestion to have 
SETF be the only updator (except for SETQ and RPLACA/RPLACD ??) makes some 
sense; presumably then we could afford to leave the value of SETF undefined.
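
To make the ambiguity concrete, a rough sketch of the two conventions
(the names are invented, and SETF of AREF is used here only as the
underlying store):

	;; RPLACA-style: return the object that was updated.
	(DEFUN VSET-RETURNING-OBJECT (V I X)
	  (SETF (AREF V I) X)
	  V)

	;; SETQ-style: return the value that was stored.
	(DEFUN VSET-RETURNING-VALUE (V I X)
	  (SETF (AREF V I) X)
	  X)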

∂01-Feb-82  2322	Jon L White <JONL at MIT-MC> 	MVLet hair, and RPG's suggestion   
Date: 1 February 1982 16:36-EST
From: Jon L White <JONL at MIT-MC>
Subject: MVLet hair, and RPG's suggestion
To: common-lisp at SU-AI

    Date: 19 Jan 1982 1551-PST
    From: Dick Gabriel <RPG at SU-AI>
    To:   common-lisp at SU-AI  
    I would like to make the following suggestion regarding the
    strategy for designing Common Lisp. . . .
    We should separate the kernel from the Lisp based portions of the system
    and design the kernel first. Lambda-grovelling, multiple values,
    and basic data structures seem kernel.
    The reason that we should do this is so that the many man-years of effort
    to implement a Common Lisp can be done in parallel with the design of
    less critical things. 
I'm sure it will be impossible to agree completely on a "kernel", but
some approach like this *must* be taken, or there'll never be any code
written in Common-Lisp at all, much less the code which implements the
various features.  Regarding hairy forms of Multiple-value things, 
I believe I voted to have both forms, because the current LISPM set
is generally useful, even if not completely parallel with Multiple-argument 
syntax; also it is small enough and useful enough to "put it in right now"
and strive for the hairy versions at a later time.
  Couldn't we go on record at least as favoring the style which permits
the duality of concept (i.e., whatever syntax works for receiving multiple
arguments also works for receiving multiple values), but noting that
we can't guarantee anything more than the several LISPM functions for
the next three years?  I'd sure hate to see this become an eclectic
kitchen sink merely because the 5-10 people who will  be involved in
Common-Lisp compiler-writing didn't want to take the day or so apiece
over the next three years to write the value side of the value/argument
receiving code.

∂02-Feb-82  0002	Guy.Steele at CMU-10A 	The right way    
Date:  1 February 1982 1650-EST (Monday)
From: Guy.Steele at CMU-10A
To: HIC at MIT-AI
Subject:  The right way
CC: common-lisp at SU-AI
In-Reply-To:  HIC@SCRC-TENEX's message of 1 Feb 82 11:38-EST
Message-Id: <01Feb82 165054 GS70@CMU-10A>

I think I take slight exception at the remark

    Of course, Lambda Macros are the right way to experiment with the
    functional programming style...

It may be a right way, but surely not the only one.  It seems to me
that actually using functions (rather than macros) also leads to a
functional programming style.  Lambda macros may be faster in some
implementations for some purposes.  However, they do not fulfill all
purposes (as has already been noted: (MAPCAR (FPOSITION ...) ...)).

∂02-Feb-82  0110	Richard M. Stallman <RMS at MIT-AI>
Date: 1 February 1982 17:51-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

It seems that the proposal to use GET and PUT for property functions
is leading to a discussion of whether it is ok to reuse Maclisp
names with different meanings.

Perhaps that topic does need to be discussed, but there is no such
problem with using GET and PUT instead of GETPR and PUTPR.
GET would be compatible with Maclisp (except for disembodied plists),
and PUT is not used in Maclisp.

Let's not get bogged down in wrangling about the bigger issue
of clean definitions vs compatibility with Maclisp as long as we
can solve the individual issues in ways that meet both goals.

∂02-Feb-82  0116	David A. Moon <Moon at SCRC-TENEX at MIT-AI> 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
Date: Monday, 1 February 1982, 23:54-EST
From: David A. Moon <Moon at SCRC-TENEX at MIT-AI>
Subject: Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
To: Earl A. Killian <EAK at MIT-MC>
Cc: common-lisp at SU-AI
In-reply-to: The message of 1 Feb 82 19:09-EST from Earl A. Killian <EAK at MIT-MC>

    Date: 1 February 1982 19:09-EST
    From: Earl A. Killian <EAK at MIT-MC>
    Subject:  Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
    To: MOON at SCRC-TENEX
    cc: common-lisp at SU-AI

    I don't want SUBSTs in Common Lisp, I want the real thing, i.e.
    inline functions...
In the future I will try to remember, when I suggest that something should
exist in Common Lisp, to say explicitly that it should not have bugs in it.

∂02-Feb-82  1005	Daniel L. Weinreb <DLW at MIT-AI>  
Date: 2 February 1982 12:25-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
To: RMS at MIT-AI
cc: common-lisp at SU-AI

While we may not need to decide about Maclisp compatibility policy for the
particular proposal you discussed, we do need to worry about whether, for
example, we must avoid renaming PUTPROP to PUT even though it is
upward-compatible, because some of us might think that "CL is not a dialect
of Lisp" if we go that far; and there might be other proposals about Maclisp
compatibility that would affect the proposal you mention regardless of its
upward-compatibility.

But what is much more important is that there are other issues that will be
affected strongly by our policy, and if we put this off now then it will be
a long time indeed before we see a coherent and accepted CL definition.  We
don't have forever; if this takes too long we will all get bored and forget
about it.  Furthermore, if we come up with a policy later, we'll have to go
back and change some earlier decisions, or else decide that the policy
won't really be followed.  I think we have to get this taken care of
immediately.

∂02-Feb-82  1211	Eric Benson <BENSON at UTAH-20> 	Re: MacLISP name compatibility, and return values of update functions   
Date:  2 Feb 1982 1204-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: MacLISP name compatibility, and return values of update functions
To: JONL at MIT-MC, common-lisp at SU-AI
In-Reply-To: Your message of 1-Feb-82 1426-MST

We had a long discussion about SETF here at Utah for our implementation and
decided that RPLACA and RPLACD are really the wrong things to use for this.
Every other SETF-type function returns (depending on how you look at it)
the value of the RHS of the assignment (the second argument) or the updated
value of the LHS (the first argument).  This has been the case in most
languages where the value of an assignment is defined, for variables, array
elements or structure elements.  The correct thing to use for
(SETF (CAR X) Y)
is
(PROGN (RPLACA X Y) (CAR X))
or the equivalent.  It appears that the value of SETF was undefined in
LISPM just because of this one case.  Perhaps it is just more apparent,
when one uses Algol syntax (i.e. CAR(X) := CDR(Y) := Z;), that this is the
obvious way to define the value of SETF.
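
A minimal sketch of such an update macro (SETF-CAR is a made-up name,
used only to illustrate the convention; it stores and then returns the
value just stored):

	(DEFMACRO SETF-CAR (X Y)
	  `(CAR (RPLACA ,X ,Y)))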
-------

∂02-Feb-82  1304	FEINBERG at CMU-20C 	a proposal about compatibility    
Date: 2 February 1982  15:59-EST (Tuesday)
From: FEINBERG at CMU-20C
To:   HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility), DLW at AI
Cc:   common-lisp at SU-AI
Subject: a proposal about compatibility

Howdy!
	Could you provide some rationale for your proposal? Are you
claiming that it is necessary to include Lisp 1.5 and the intersection
of Maclisp and Interlisp in Common Lisp before it can be truly called
a dialect of Lisp? 

	I agree with DLW, it is rather important to settle the issue
of Maclisp compatibility soon.

∂02-Feb-82  1321	Masinter at PARC-MAXC 	Re: MacLISP name compatibility, and return values of update functions   
Date: 2 Feb 1982 13:20 PST
From: Masinter at PARC-MAXC
Subject: Re: MacLISP name compatibility, and return values of update functions
In-reply-to: BENSON's message of 2 Feb 1982 1204-MST
To: common-lisp at SU-AI

The Interlisp equivalent of SETF, "change", is defined in that way. It turns out
that the translation of (change (CAR X) Y) is (CAR (RPLACA X Y)). The
compiler normally optimizes out extra CAR/CDR's when not in value context.
RPLACA is retained for compatibility.


Larry

∂02-Feb-82  1337	Masinter at PARC-MAXC 	SUBST vs INLINE, consistent compilation   
Date: 2 Feb 1982 13:34 PST
From: Masinter at PARC-MAXC
Subject: SUBST vs INLINE, consistent compilation
To: Common-Lisp@SU-AI
cc: Masinter

I think there is some rationale both for SUBST-type macros and for INLINE.

SUBST macros are quite important for cases where the semantics of
lambda-binding is not wanted, e.g., where (use your favorite syntax):

(DEFSUBST SWAP (X Y)
    (SETQ Y (PROG1 X (SETQ X Y))))

This isn't a real example, but the idea is that sometimes a simple substitution
expresses what you want to do more elegantly than the equivalent

(DEFMACRO SWAP X
	`(SETQ ,(CADDR X) (PROG1 ,(CADR X) (SETQ ,(CADR X) ,(CADDR X)))))

These are definitely not doable with inlines. (I am not entirely sure they can be 
correctly implemented with SUBST-macros either.)

-----------------

There is a more important issue which is being skirted in these various
discussions, and that is the one of consistent compilation: when is it
necessary to recompile a function in order to preserve the equivalence of
semantics of compiled and interpreted code. There are some simple situations
where it is clear:
	The source for the function changed
	The source for some macros used by the function changed

There are other situations where it is not at all clear:
	The function used a macro which accessed a data structure which
	has changed.

Tracing the actual data structures used by a macro is quite difficult. It is not
at all difficult for subst and inline macros, though, because the expansion of
the macro depends only on the macro-body and the body of the macro
invocation.

I think the important issue for Common Lisp is: what is the policy on consistent
compilation?

Larry

∂02-Feb-82  1417	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: a proposal about compatibility  
Date:  2 Feb 1982 1714-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: a proposal about compatibility
To: FEINBERG at CMU-20C
cc: DLW at MIT-AI, common-lisp at SU-AI
In-Reply-To: Your message of 2-Feb-82 1559-EST

This is a response to a couple of requests to justify my comments.  Based
on one of these, I feel it necessary to say that nothing in this message
(nor in the previous one) should be taken to be sarcasm.  I am trying to
speak as directly as possible.  I find it odd when people take me as
being sarcastic when I start with the assumption that CL should be a
dialect of Lisp, and then give what I think is a fairly conservative
explanation of what I think that should mean.  However once I get into
the mode of looking for sarcasm, I see how easy it is to interpret
things that way.  Almost any of the statements I make below could be
taken as sarcasm.  It depends upon what expression you imagine being on
my face.  The rest of this message was typed with a deadpan expression.

I thought what I said was that if CL used a name in the set I mentioned,
the use should be consistent with the old use.  I didn't say that
CL should in fact implement all of the old functions, although I would
not be opposed to such a suggestion.  But what I actually said was that
CL shouldn't use the old names to mean different things.

As for justification, consider the following points:
  - now and then we might like to transport code from one major family
	to another, i.e. not just Maclisp to CL, etc., but Interlisp to
	CL.  I realize this wouldn't be possible with code of some
	types, but I think at least some of our users do write what I
	would call "vanilla Lisp", i.e. Lisp that uses mostly common
	functions that they expect to be present in any Lisp system.  I
	admit that such transportation is not going to be easy under any
	circumstance and for that reason will not be all that common,
	but we should not make it more complicated than necessary.
  - I would like to be able to teach students Lisp, and then have them
	be able to use what they learned even if they end up using a
	different implementation.  Again, some reorientation is
	obviously going to be needed when they move to another
	implementation, but it would be nice not to have things that
	look like they ought to be the same, and aren't.  Further, it
	would be helpful for there to be enough similarity that we can
	continue to have textbooks describe Lisp.
  - I find myself having to deal with several dialects.  Of course I am
	probably a bit unusual, in that I am supporting users, rather
	than implementing systems.  Presumably most of the users will
	spend their time working on one system.  But I would like for
	the most common functions to do more or less the same thing
	in all of these systems.
  - Now and then we write papers, journal articles, etc.  It would be
	helpful for these to be readable by people in other Lisp
	communities.
-------

∂02-Feb-82  1539	Richard M. Stallman <RMS at MIT-AI> 	No policy is a good policy  
Date: 2 February 1982 18:22-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: No policy is a good policy
To: Common-lisp at SU-AI

Common Lisp is an attempt to compromise between several goals:
cleanliness, utility, efficiency and compatibility both between
implementations and with Maclisp.  On any given issue, it is usually
possible to find a "right" solution which may meet most of these goals
well and meet the others poorly but tolerably.  Which goals have to be
sacrificed differs in each case.

For example, issue A may offer a clean, useful and efficient solution
which is incompatible, but in ways that are tolerable.  The other
solutions might be more compatible but worse in general.  Issue B may
offer a fully upward compatible solution which is very useful and fast
when implemented, which we may believe justifies being messy.  If we
are willing to consider each issue separately and sacrifice different
goals on each, the problem is easy.  But if we decide to make a global
choice of how much incompatibility we want, how much cleanliness we
want, etc., then probably whichever way we decide we will be unable to
use both the best solution for A and the best solution for B.  The
language becomes worse because it has been designed dogmatically.

Essentially the effect of having a global policy is to link issues A
and B, which could otherwise be considered separately.  The combined
problem is much harder than either one.  For example, if someone found a new
analogy between ways of designing the sequence function and ways of
designing read syntaxes for sequences, it might quite likely match
feasible designs for one with problematical designs for the other.
Then two problems which are proving enough work to get agreement on
individually would turn into one completely intractable problem.

It is very important to finish Common Lisp reasonably quickly, if the
effort is to be useful.  The study of philosophy of language design is
a worthy field but a difficult one.  There are many more years of
work to be done in it.  If we make solving this field part of the plan
for designing Common Lisp, we will not be finished in time to do the
job that Common Lisp was intended for: to enable users of different
Maclisp descendants to write portable programs.

∂02-Feb-82  1926	DILL at CMU-20C 	upward compatibility   
Date:  2 Feb 1982 2225-EST
From: DILL at CMU-20C
Subject: upward compatibility
To: common-lisp at SU-AI

I believe that compatibility with other lisp dialects should be a
consideration in the design of Common Lisp, but it should absolutely have
less priority than considerations of portability, taste, and efficiency.
It is possible that this won't leave a whole lot of room for upward
compatibility.

If Common Lisp manages to be a high-quality, widely implemented common
language, the user community will end up being much larger than that of
any existing lisp dialect.  Imposing misfeatures on those users because
a much smaller community of users has gotten used to those features
doesn't make sense.

I also don't see why it is more important to maintain compatibility with
Maclisp than with other dialects.
-------

∂02-Feb-82  2148	RPG  	MVLet    
To:   common-lisp at SU-AI  
Scott pointed out to me that the MVCall construct can take
a general LAMBDA expression, complete with hairy LAMBDA list
syntax. Thus one can write:

		(MV-CALL #'(LAMBDA (FOO BAR (:REST ZTESCH)) ...)
			 (BAZOLA))

Which is virtually the same as:

	(MVLET (FOO BAR (:REST ZTESCH)) (BAZOLA) ...)

but the above MVCall syntax strikes me as superior (using LAMBDAs for
LAMBDA-like things).

Therefore, I will go along with Scott's LISPM syntax + MVCALL.
			-rpg-

∂02-Feb-82  2223	Richard M. Stallman <RMS at MIT-AI>
Date: 3 February 1982 01:06-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI, dill at CMU-20C

The reason it is important to be compatible pretty much
with Maclisp is that that means being compatible with the
existing Lisp machine system, and that is very important
to all the Lisp machine users.  And to Lisp machine
system maintainers too.  It is fine if old Maclisp functions
get dropped from the definition of Common Lisp, and replaced
with cleaner ways of doing things: the Lisp machine can implement
the new way while continuing to support the old one, Common Lisp or no.
But making old Maclisp functions do new things that are fundamentally
incompatible will cause a great deal of trouble.

The purpose of the Common Lisp project was to unify Maclisp dialects.
The narrowness of the purpose is all that gives it a chance of success.
It may be an interesting project to design a totally new Lisp dialect,
but you have no chance of getting this many people to agree on a design
if you remove the constraints.

∂02-Feb-82  2337	David A. Moon <MOON at MIT-MC> 	upward compatibility   
Date: 3 February 1982 02:36-EST
From: David A. Moon <MOON at MIT-MC>
Subject: upward compatibility
To: common-lisp at SU-AI

I agree with RMS (for once).  Common Lisp should be made a good language,
but designing "pie in the sky" will simply result in there never being
a Common Lisp.  This is not a case of the Lisp Machine people being
recalcitrant and attempting to impose their own view of the world, but
simply that there is no chance of this large a group agreeing on anything
if there are no constraints.  I think the Lisp Machine people have already
shown far more tolerance and willingness to compromise than anyone would ever
have the right to expect.

∂03-Feb-82  1622	Earl A. Killian <EAK at MIT-MC> 	SUBST vs INLINE, consistent compilation   
Date: 3 February 1982 19:20-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  SUBST vs INLINE, consistent compilation
To: Masinter at PARC-MAXC
cc: Common-Lisp at SU-AI

In Common Lisp the macro definition of SWAP would be the same
as your SUBST, except for some commas (i.e. defmacro handles
normal argument lists).  I don't think Common Lisp needs subst
as another way of defining macros.  Inline functions are,
however, useful.
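
For reference, a sketch of the DEFMACRO version being alluded to, with
the same shape as the DEFSUBST plus the commas:

	(DEFMACRO SWAP (X Y)
	  `(SETQ ,Y (PROG1 ,X (SETQ ,X ,Y))))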

∂04-Feb-82  1513	Jon L White <JONL at MIT-MC> 	"exceptions" possibly based on misconception; and EVAL strikes again  
Date: 4 February 1982 18:04-EST
From: Jon L White <JONL at MIT-MC>
Subject: "exceptions" possibly based on misconception; and EVAL strikes again
To: Hic at SCRC-TENEX, Guy.Steele at CMU-10A
cc: common-lisp at SU-AI


The several "exceptions" just taken about implementing functional programming 
may be in part due to a misconception taken from RMS's remark

    Date: 29 January 1982 19:46-EST
    From: Richard M. Stallman <RMS at MIT-AI>
    Subject: Trying to implement FPOSITION with LAMBDA-MACROs.
    . . . 
    The idea of FPOSITION is that ((FPOSITION X Y) MORE ARGS)
    expands into (FPOSITION-INTERNAL X Y MORE ARGS), and . . . 
    In JONL's suggestion, the expander for FPOSITION operates on the
    entire form in which the call to the FPOSITION-list appears, not
    just to the FPOSITION-list.

This isn't right -- in my suggestion, the expander for FPOSITION would 
operate only on (FPOSITION X Y), which *could* then produce something like 
(MACRO . <another-fun>); and it would be  <another-fun>  which would get 
the "entire form in which the call to the FPOSITION-list appears"

HIC is certainly justified in saying that something is wrong, but it looked
to me (and maybe to Guy) as though he was saying that alternatives to lambda-macros
were wrong.  However, this side-diversion into a misconception has detracted 
from the main part of my "first suggestion", namely to fix the misdesign in 
EVAL whereby it totally evaluates a non-atomic function position before trying
any macro-expansion. 

    Date: 1 February 1982 20:13-EST
    From: Howard I. Cannon <HIC at MIT-MC>
    Subject:  The right way
    To: Guy.Steele at CMU-10A
    . . . 
    If, however, we do accept (LAMBDA ...) as a valid form that self-evaluates 
    (or whatever), then I might propose changing lambda macros to be called
    in normal functional position, or just go to the scheme of not 
    distinguishing between lambda and regular macros.

So how about it?  Regardless of the lambda-macro question, or the style
of functional programming, let EVAL take

   ((MUMBLE ...) A1 ... A2)  into  `(,(macroexpand '(MUMBLE ...)) A1 ... A2)

and try its cycle again.  Only after (macroexpand '(MUMBLE ...)) fails to
produce something discernibly a function would the nefarious "evaluation"
come up for consideration.

[P.S. -- this isn't the old (STATUS PUNT) question -- that only applied to
 forms which had, from the beginning, an atomic-function position.]
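
A tiny sketch of the suggested rule, just to pin down the intent (the
function name is invented, and MACROEXPAND-1 is assumed to do one step
of expansion, returning the form unchanged if the head is not a macro):

	;; Given a form whose function position is itself a list, expand
	;; that position once and hand the result back to EVAL's cycle.
	(DEFUN EXPAND-FUNCTION-POSITION (FORM)
	  (IF (CONSP (CAR FORM))
	      (CONS (MACROEXPAND-1 (CAR FORM)) (CDR FORM))
	      FORM))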

∂04-Feb-82  2047	Howard I. Cannon <HIC at MIT-MC> 	"exceptions" possibly based on misconception; and EVAL strikes again   
Date: 4 February 1982 23:45-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  "exceptions" possibly based on misconception; and EVAL strikes again
To: JONL at MIT-MC
cc: common-lisp at SU-AI, Guy.Steele at CMU-10A

        If, however, we do accept (LAMBDA ...) as a valid form that self-evaluates 
        (or whatever), then I might propose changing lambda macros to be called
        in normal functional position, or just go to the scheme of not 
        distinguishing between lambda and regular macros.

    So how about it?  Regardless of the lambda-macro question, or the style
    of functional programming, let EVAL take

       ((MUMBLE ...) A1 ... A2)  into  `(,(macroexpand '(MUMBLE ...)) A1 ... A2)

Since, in my first note, I said "If, however, we do accept (LAMBDA ...) as a
valid form that...", and we aren't, I am strenuously against this suggestion.

∂05-Feb-82  2247	Fahlman at CMU-20C 	Maclisp compatibility    
Date:  6 Feb 1982 0141-EST
From: Fahlman at CMU-20C
Subject: Maclisp compatibility
To: common-lisp at SU-AI


I would like to second RMS's views about Maclisp compatibility: there are
many goals to be traded off here, and any rigid set of guidelines is
going to do more harm than good.  Early in the effort the following
general principles were agreed upon by those working on Common Lisp at
the time:

1. Common Lisp will not be a strict superset of Maclisp.  There are some
things that need to be changed, even at the price of incompatibility.
If it comes down to a clear choice between making Common Lisp better
and doing what Maclisp does, we make Common Lisp better.

2. Despite point 1, we should be compatible with Maclisp and Lisp
Machine Lisp unless there is a good reason not to be.  Functions added
or subtracted are relatively innocuous, but incompatible changes to
existing functions should only be made with good reason and after
careful deliberation.  Common Lisp started as a Maclisp derivative, and
we intend to move over much code and many users from the Maclisp
world.  The easier we make that task, the better it is for all of us.

3. If possible, consistent with points 1 and 2, we should not do
anything that screws people moving over from Interlisp.  The same holds
for the lesser-used Lisps, but with correspondingly less emphasis.  I
think that Lisp 1.5 should get no special treatment here: all of its
important features show up in Maclisp, and the ones that have changed or
dropped away have done so for good reason.

-- Scott
-------

∂06-Feb-82  1200	Daniel L. Weinreb <dlw at MIT-AI> 	Maclisp compatibility    
Date: Friday, 6 February 1981, 14:56-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Maclisp compatibility
To: Fahlman at CMU-20C, common-lisp at SU-AI

Your message is exactly what I wanted to see.  This is just as much of a
policy as I think we need.  I didn't want any more rigid guidelines than
that; I just wanted a set of principles that we all agree upon.

Not everybody on the mailing list seems to agree with your set here.  I
do, by the way, but clearly HEDRICK does not.  I hope the official
referee will figure out what to do about this.  Guy?

∂06-Feb-82  1212	Daniel L. Weinreb <dlw at MIT-AI> 	Return values of SETF    
Date: Friday, 6 February 1981, 15:12-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Return values of SETF
To: common-lisp at SU-AI

I'm pretty much convinced by Masinter's mail.  SETF should be defined to
return the value that it stores.  SETF is really too important a form to
work in an explicitly undefined way, and compiler optimizations
and/or special-purpose setting functions (that exist only so that SETF
can turn into them) are well worth it to keep SETF from having to have
crummy "undefined" behavior.  (Not having any kind of up-to-date Common
Lisp manual, I have no idea how or if it is currently defined.)

∂06-Feb-82  1232	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
Date: Friday, 6 February 1981, 15:25-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: MVLet    
To: RPG at SU-AI, common-lisp at SU-AI

I see your point.  I agree; given this insight, I am happy with the Lispm
syntax plus MVCALL.  There is one thing that I'd like to see improved,
if possible.  In the example:

		(MV-CALL #'(LAMBDA (FOO BAR (:REST ZTESCH)) ...)
			 (BAZOLA))

the order of events is that BAZOLA happens first, and the body of the
function happens second.  This has the same problem that
lambda-combinations had; LET was introduced to solve the problem.  If
anyone can figure out something that solves this problem for MV-CALL
without any other ill effects, I'd like to know about it.  One
possibility is to simply switch the order of the two subforms; what do
people think about that?

However, I'm not trying to be a troublemaker.  If nobody comes up with a
widely-liked improvement, I will be happy to accept the proposal as it
stands.

∂06-Feb-82  1251	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Maclisp compatibility 
Date:  6 Feb 1982 1547-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Maclisp compatibility
To: dlw at MIT-AI
cc: Fahlman at CMU-20C, common-lisp at SU-AI
In-Reply-To: Your message of 6-Feb-82 1506-EST

No, I think the approach suggested by the folks at CMU is fine.
-------

∂06-Feb-82  1416	Eric Benson <BENSON at UTAH-20> 	Re: Maclisp compatibility  
Date:  6 Feb 1982 1513-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: Maclisp compatibility
To: Fahlman at CMU-20C, common-lisp at SU-AI
In-Reply-To: Your message of 5-Feb-82 2341-MST

"Lisp 1.5 should get no special treatment here: all of its important features
show up in Maclisp, and the ones that have changed or dropped away have done
so for good reason."

I am curious about one feature of Lisp 1.5 (and also Standard Lisp) which was
dropped from Maclisp.  I am referring to the Flag/FlagP property list functions.
I realize that Put(Symbol, Indicator, T) can serve the same function, but I
can't see any good reason why the others should have been dropped.  In an
obvious implementation of property lists Put/Get can use dotted pairs and
Flag/FlagP use atoms, making the property list itself sort of a corrupted
association list.  Maclisp and its descendants seem to use a flat list of
alternating indicators and values.  It isn't clear to me what advantage this
representation gives over the a-list.  Were Flag and FlagP dropped as a
streamlining effort, or what?
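
For reference, a rough sketch of the equivalence mentioned above (FLAG
and FLAGP here just borrow the Lisp 1.5 names, with simplified argument
conventions, on top of Maclisp-style GET/PUTPROP):

	;; Flagging is just storing T under the indicator ...
	(DEFUN FLAG (SYMBOL INDICATOR)
	  (PUTPROP SYMBOL T INDICATOR))

	;; ... and FLAGP is just GET, treating the value as a boolean.
	(DEFUN FLAGP (SYMBOL INDICATOR)
	  (GET SYMBOL INDICATOR))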
-------

∂06-Feb-82  1429	Howard I. Cannon <HIC at MIT-MC> 	Return values of SETF
Date: 6 February 1982 17:23-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  Return values of SETF
To: common-lisp at SU-AI
cc: dlw at MIT-AI

I strongly agree.  I have always thought it a screw that SETF did not return
a value like SETQ.  It sometimes makes for more compact, readable, and convenient
coding.

∂06-Feb-82  2031	Fahlman at CMU-20C 	Value of SETF  
Date:  6 Feb 1982 2328-EST
From: Fahlman at CMU-20C
Subject: Value of SETF
To: common-lisp at SU-AI


Just for the record, I am also persuaded by Masinter's arguments for
having SETF return the value that it stores, assuming that RPLACA and
RPLACD are the only forms that want to do something else.  It would
cause no particular problems in the Spice implementation to add two new
primitives that are like RPLACA and RPLACD but return the values, and
the additional uniformity would be well worth it.

-- Scott
-------

∂06-Feb-82  2102	Fahlman at CMU-20C 	Re: MVLet      
Date:  6 Feb 1982 2354-EST
From: Fahlman at CMU-20C
Subject: Re: MVLet    
To: dlw at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 6-Feb-82 1536-EST


DLW's suggestion that we switch the order of arguments to M-V-CALL, so
that the function comes after the argument forms, does not look very
attractive if you allow more than one argument form.  This would be the
universally reviled situation in which a single required argument comes
after a rest arg.

As currently proposed, with the function to be called as the first arg,
M-V-CALL exactly parallels the format of FUNCALL.  (The difference, of
course, is that M-V-CALL uses all of the values returned by each of the
argument forms, while FUNCALL accepts only one value from each argument
form.)
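
A small example of the parallel, assuming a FLOOR that returns its
quotient and remainder as two values:

	(FUNCALL #'LIST (FLOOR 7 2))              ; => (3)       one value per argument form
	(M-V-CALL #'LIST (FLOOR 7 2) (FLOOR 9 4)) ; => (3 1 2 1) all values of every form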

-- Scott
-------

∂07-Feb-82  0129	Richard Greenblatt <RG at MIT-AI>  
Date: 7 February 1982 04:26-EST
From: Richard Greenblatt <RG at MIT-AI>
To: common-lisp at SU-AI

Re compatibility, etc
  It's getting really hard to keep track of
where things "officially" stand.   Hopefully,
the grosser of the suggestions that go whizzing
by on this mailing list are getting flushed,
but I have this uneasy feeling that one
of these days I will turn around and find
there has been "agreement" to change something
really fundamental like EQ.
  Somewhere there should be a clear and current summary
of "Proposed Changes which would change
the world."  What I'm talking about here are cases
where large bodies of code can reasonably be
expected to be affected, or changes or extensions to
time-honored central concepts like MEMBER or LAMBDA.
  It would be nice to have summaries from time to time
on the new frobs (like this MV-LET thing) that are proposed
but that is somewhat less urgent.

∂07-Feb-82  0851	Fahlman at CMU-20C  
Date:  7 Feb 1982 1149-EST
From: Fahlman at CMU-20C
To: RG at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 7-Feb-82 0426-EST


I feel sure that no really incompatible changes will become "official"
without another round of explicit proposal and feedback, though the
group has grown so large and diverse that we can no longer expect
unanimity on all issues -- we will have to be content with the emergence
of substantial consensus, especially among those people representing
major implementation efforts.  Of course, there is a weaker form of
"acceptance" in which a proposal seems to have been accepted by all
parties and therefore becomes the current working hypothesis, pending an
official round of feedback.

-- Scott
-------

∂07-Feb-82  2234	David A. Moon <Moon at MIT-MC> 	Flags in property lists
Date: Monday, 8 February 1982, 01:31-EST
From: David A. Moon <Moon at MIT-MC>
Subject: Flags in property lists
To: Eric Benson <BENSON at UTAH-20>
Cc: common-lisp at SU-AI

Flat property lists can be stored more efficiently than pair lists
in Lisp with cdr-coding.  That isn't why Maclisp dropped them, of
course; probably Maclisp dropped them because they are a crock and
because they make GET a little slower, which slows down the
interpreter in a system like Maclisp that stores function definitions
on the property list.

∂08-Feb-82  0749	Daniel L. Weinreb <DLW at MIT-MC> 	mv-call   
Date: 8 February 1982 10:48-EST
From: Daniel L. Weinreb <DLW at MIT-MC>
Subject: mv-call
To: common-lisp at SU-AI

I guess my real disagreement with mv-call is that I don't like to see it
used with more than one form.  I have explained before that the mv-call
with many forms has the effect of concatenating together the returned
values of many forms, which is something that I cannot possibly imagine
wanting to do, given the way we use multiple values in practice today.  (I
CAN see it as useful in a completely different programming style that is so
far unexplored, but this is a standardization effort, not a language
experiment, and so I don't think that's relevant.)  This was my original
objection to mv-call.

RPG's message about mv-call shows how you can use it with only one form to
get the effect of the new-style lambda-binding multiple-value forms, and
that looked attractive.  But I still don't like the mv-call form when used
with more than one form.

I do not for one moment buy the "analogy with funcall" argument.  I think
of funcall as a function.  It takes arguments and does something with them,
namely, apply the first to the rest.  mv-call is most certainly not a
function: it is a special form.  I think that in all important ways,
what it does is different in kind and spirit from funcall.  Now, I realize
that this is a matter of personal philosophy, and you may simply not feel
this way.

Anyway, I still don't want to make trouble.  So while I'd prefer having
mv-call only work with one form, and then to have the order of its subforms
reversed, I'll go along with the existing proposal if nobody supports me.

∂08-Feb-82  0752	Daniel L. Weinreb <DLW at MIT-MC>  
Date: 8 February 1982 10:51-EST
From: Daniel L. Weinreb <DLW at MIT-MC>
To: common-lisp at SU-AI

I agree with RG, even after hearing Scott's reply.  I would like to
see, in the next manual, a section prominently placed that summarizes
fundamental incompatibilities with Maclisp and changes in philosophy,
especially those that are not things that are already in Zetalisp.
For those people who have not been following Common Lisp closely,
and even for people like me who are following sort of closely, it would
be extremely valuable to be able to see these things without poring
over the entire manual.

∂08-Feb-82  1256	Guy.Steele at CMU-10A 	Flat property lists   
Date:  8 February 1982 1546-EST (Monday)
From: Guy.Steele at CMU-10A
To: benson at utah-20
Subject:  Flat property lists
CC: common-lisp at SU-AI
Message-Id: <08Feb82 154637 GS70@CMU-10A>

LISP 1.5 used flat property lists (see LISP 1.5 Programmer's Manual,
page 59).  Indeed, Standard LISP is the first I know of that did *not*
use flat property lists.  Whence came this interesting change, after all?
--Guy

∂08-Feb-82  1304	Guy.Steele at CMU-10A 	The "Official" Rules  
Date:  8 February 1982 1559-EST (Monday)
From: Guy.Steele at CMU-10A
To: rg at MIT-AI
Subject:  The "Official" Rules
CC: common-lisp at SU-AI
Message-Id: <08Feb82 155937 GS70@CMU-10A>

Well, I don't know what the official rules are, but my understanding
was that my present job is simply to make the revisions decided
upon in November, and when that revised document comes out we'll have
another round of discussion.  This is not to say that the discussion
going on now is useless.  I am carefully saving it all in a file for
future collation.  It is just that I thought I was not authorized to
make any changes on the basis of current discussion, but only on what
was agreed upon in November.  So everyone should rest assured that a
clearly labelled document like the previous "Discussion" document
will be announced before any other "official" changes are made.

(Meanwhile, I have a great idea for eliminating LAMBDA from the language
by using combinators...)
--Guy

∂08-Feb-82  1410	Eric Benson <BENSON at UTAH-20> 	Re:  Flat property lists   
Date:  8 Feb 1982 1504-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re:  Flat property lists
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI
In-Reply-To: Your message of 8-Feb-82 1346-MST

I think I finally figured out what's going on.  Indeed every Lisp dialect I
can find a manual for in my office describes property lists as flat lists
of alternating indicators and values.  The dialects which do have flags
(Lisp 1.5 and Lisp/360) appear to just throw them in as atoms in the flat
list.  This obviously leads to severe problems in synchronizing the search
down the list!  Perhaps this is the origin of Moon's (unsupported) claim
that flags are a crock.  Flags are not a crock, but the way they were
implemented certainly was!  This must have led to their elimination in more
recent dialects, such as Stanford Lisp 1.6, Maclisp and Interlisp.
Standard Lisp included flags, but recent implementations have used a more
reasonable implementation for them, by making the p-list resemble an a-list
except for the atomic flags.  Even without flags, an a-list seems like a
more obvious implementation to me, since it reflects the structure of the
data.  There is NO cost difference in space or speed (excluding cdr-coding)
between a flat list and an a-list if flags are not included.  The presence
of flags on the list requires a CONSP test for each indicator comparison
which would otherwise be unnecessary.

Much of the above is speculation.  Lisp historians please step forward and
correct me.
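
A sketch of the representation described above, just to make the CONSP
test explicit (the accessor names are invented; each entry in the list
is either an atom serving as a flag or an (indicator . value) pair):

	(DEFUN PLIST-GET (PLIST INDICATOR)
	  (DOLIST (ENTRY PLIST)
	    ;; the CONSP test that a flag-free representation would not need
	    (WHEN (AND (CONSP ENTRY) (EQ (CAR ENTRY) INDICATOR))
	      (RETURN (CDR ENTRY)))))

	(DEFUN PLIST-FLAGP (PLIST FLAG)
	  (DOLIST (ENTRY PLIST)
	    (WHEN (AND (ATOM ENTRY) (EQ ENTRY FLAG))
	      (RETURN T))))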
-------

∂08-Feb-82  1424	Don Morrison <Morrison at UTAH-20> 	Re:  Flat property lists
Date:  8 Feb 1982 1519-MST
From: Don Morrison <Morrison at UTAH-20>
Subject: Re:  Flat property lists
To: Guy.Steele at CMU-10A
cc: benson at UTAH-20, common-lisp at SU-AI
In-Reply-To: Your message of 8-Feb-82 1346-MST

Stanford LISP 1.6 (which predates "Standard" LISP) used a-lists
instead of flat property lists.  See the manual by Quam and Diffie
(SAILON 28.7), section 3.1.  

It was also mentioned a message or two ago that even in implementations
without cdr-coding  flat  property  lists are  more  efficient.   Would
someone explain to me why?   If we assume that  cars and cdrs cost  the
same and do not have flags (Stanford LISP 1.6 does not have flags) then
I see no difference in  cost.  And certainly the a-list  implementation
is a bit more perspicuous. There's  got to be a reason besides  inertia
why nearly all LISPs use flat property lists.  But in any case,  Common
LISP has no  business telling  implementers how  to implement  property
lists -- simply explain the semantics of PutProp, GetProp, and RemProp,
or whatever they end up being called and leave it to the implementer to
use a  flat  list, a-list,  hash-table,  or,  if he  insists,  a  flat,
randomly ordered list of triples.  It should make no difference to  the
Common LISP definition. 
-------

∂08-Feb-82  1453	Richard M. Stallman <RMS at MIT-AI>
Date: 8 February 1982 16:56-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

In my opinion, the distinction between functions and special
forms is not very important, and Mv-call really is like funcall.

∂19-Feb-82  1656	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Revised sequence proposal 
Date: 19 Feb 1982 1713-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: common-lisp at SU-AI
Subject: Revised sequence proposal
Message-ID: <820118171315FAHLMAN@CMU-20C>

At long last, my revised revised proposal for sequence functions is
ready for public perusal and comment.  Sorry for the delay -- I've been
buried by other things and this revision was not as trivial to prepare
as I had expected -- several false starts.

The proposal is in the files <FAHLMAN>NNSEQ.PRESS and <FAHLMAN>NNSEQ.DOC
on CMU-20C.

-- Scott
   --------

∂20-Feb-82  1845	Scott.Fahlman at CMU-10A 	Revised sequence proposal    
Date: 20 February 1982 2145-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Revised sequence proposal
Message-Id: <20Feb82 214553 SF50@CMU-10A>


...is also on CMUA as TEMP:NNSEQ.PRE[C380SF50] and also .DOC.  It might
be easier for some folks to FTP from there.
-- Scott

∂21-Feb-82  2357	MOON at SCRC-TENEX 	Fahlman's new new sequence proposal, and an issue of policy 
Date: Monday, 22 February 1982  02:50-EST
From: MOON at SCRC-TENEX
To:   common-lisp at sail
Subject: Fahlman's new new sequence proposal, and an issue of policy

CMU-20C:<FAHLMAN>NNSEQ.DOC seems to be a reasonable proposal; let's accept
it and move on to something else.  A couple nits to pick:

I don't understand the type restrictions for CONCAT.  Is (vector fixnum) a
subtype of (vector t)?  Is (vector (mod 256.)) a subtype of (vector t)?
Presumably all 3 of these types require different open-coded access
operations on VAXes, so if CONCAT allows them to be concatenated without
explicit coercions then the type restriction is inutile.  I would suggest
flushing the type restrictions but retaining the output-type specifier.
After all, the overhead is only a type dispatch for each argument; the
inner loops can be open-coded on machines where that is useful.  The
alternative seems to be to have implementation-dependent type restrictions,
something we seem to have decided to avoid totally.

mumble-IF-NOT is equally as useful as mumble-IF, if you look at how they
are used.  This is because the predicate argument is rarely a lambda, but
is typically some pre-defined function, and most predicates do not come in
complementary versions.  (Myself, I invariably write such things with
LOOP, so I don't have a personal axe to grind.)

REMOVE should take :start/:end (perhaps the omission of these is just a
typo).


A possible other thing to move on to: It's pretty clear that the more
advanced things like the error system, LOOP, the package system, and
possibly the file system aren't going to be reasonable to standardize on
for some time (say, until the summer).  As far as packages go, let's say
that there are keywords whose names start with a colon and leave the rest
for later; keywords are the only part of packages that is really pervasive
in the language.  As far as errors go, let's adopt the names of the
error-reporting functions in the new Lisp machine error system and leave
the details of error-handling for a later time.  I'd like to move down to
some lower-level things.  Also I'm getting extremely tired of the large
ratio of hot air to visible results.  There are two things that are
important to realize:  We don't need to define a complete and comprehensive
Lisp system as the first iteration of Common Lisp for it to be useful.  If
the Common Lisp effort doesn't show some fruit soon people are going to
start dropping out.

We should finish defining the real basics like the function-calling
mechanism, evaluation, types, and the declaration mechanism.  Then we ought
to work on defining a kernel subset of the language in terms of which the
rest can be written (not necessarily efficiently); the Common Lisp
implementation in terms of itself may not actually be used directly and
completely by any implementation, but will provide a valuable form of
executable documentation as well as an important aid to bringing up new
implementations.  Then some people should be delegated to write such code.
Doing this will also force out any fuzzy thinking in the basic low-level
stuff.

This is, in fact, exactly the way the Lisp machine system is structured.
The only problem is that it wasn't done formally and could certainly
benefit from rethinking now that we have 7 years of experience in building
Lisp systems this way behind us.  From what I know of VAX NIL, Spice Lisp,
and S-1 NIL, they are all structured this way also.

Note also that this kernel must include not only things that are in the
language, but some basic tools which ought not to have to be continuously
reinvented; for example the putative declaration system we are assuming
will exist and solve some of our problems, macro-writing tools, a
code-walking tool (which the new syntax for LOOP, for one, will implicitly
assume exists).

∂22-Feb-82  0729	Griss at UTAH-20 (Martin.Griss)    
Date: 22 Feb 1982 0820-MST
From: Griss at UTAH-20 (Martin.Griss)
To: MOON at SCRC-TENEX
cc: Griss
In-Reply-To: Your message of 22-Feb-82 0113-MST
Remailed-date: 22 Feb 1982 0827-MST
Remailed-from: Griss at UTAH-20 (Martin.Griss)
Remailed-to: common-lisp at SU-AI

Re: Moon's comment on middle-level code as "working" documentation. That is
exactly the route we have been following for PSL at Utah; in the process of
defining and porting our Versions 2 and 3 systems from 20 to VAX to Apollo
domain, a lot of details have been discussed and issues identified.
In order for us to become involved and for others to begin some sort of
implementation, a serious start has to be made on these modules.

We certainly would like to use PSL as a starting point for a Common Lisp
implementation, and this can only happen once LISP sources and firm
agreement on some modules have been reached.  We have hopes of PSL running on
DEC-20, VAX, 68000, 360/370 and CRAY sometime later in the year, and would
be delighted to have PSL as a significant sub-set of Common LISP, if not
more.  But right now, there is not much to do.

Martin Griss
-------


∂08-Feb-82  1222	Hanson at SRI-AI 	common Lisp 
Date:  8 Feb 1982 1220-PST
From: Hanson at SRI-AI
Subject: common Lisp
To:   rpg at SU-AI
cc:   hanson

	I would indeed like to influence Common Lisp, if it is not
too late, and if any of the deficiencies of FranzLisp are about to
be repeated.  There are a number of people here in the Vision group
who have various ideas and experiences with other Lisps that I can
try and stand up for.
	As I am pretty much stuck with FranzLisp on the Image Understanding
Testbed, there are a number of things that concern me which may or may not
have been considered in Common Lisp.  Among them are:
	* Sufficient IO flexibility to give you redirection from devices
to files (easy in Franz due to Unix's treating devices as files, possible
problems in other environments)
	* Single character IO to allow the construction of Command-completion
monitors in Lisp, etc. (Impossible without special Hackery in Franz since
it always waits for a line feed before transmitting a line.)
	* An integrated extensible screen editor like our current VAX/Unix/emacs
or like the Lisp Machine editor.  Fanciness of the raw environment is not
a virtue. Let the extensibility take care of that.
	* USER-ORIENTED STRING MANIPULATION utilities.  Franz is a total loser
here - after a certain number of (implode (car (aexploden foo)))'s one begins
to lose one's sense of humor.
	* FLOATING POINT COMPUTATION that is as fast as the machine can go.
The VAX is pretty slow as it is, without having Lisp overhead in the way
when you want to do a convolution on a 1000x1000 picture file.
	* SPECIAL TWO-DIM DATA STRUCTURES allowing very fast access and
arithmetic, on 8-bit, 16-bit, and short and long floating-point data, for
such things as image processing, edge operators, convolutions.  I don't
know what you would do here, but possibly special matrix multiplying
SW down at the bottom level would be a start - one needs all kinds of
matrix arithmetic primitives to work analogously to the string primitives.
	Also, I've heard it said that special text primitives are also
desirable to write an efficient EMACS in Lisp.
	* DYNAMIC LINKING OF FOREIGN LANGUAGES.  You should be able to
do for almost anything what Franzlisp does for C, but with some far
better mechanism for passing things like strings BACK UP to Lisp (not
possible without hackery in Franz).  We want to be able to use Lisp
as an executive to run programs and maybe even subroutines written in
any major language on the VAX.

	-That's all I can think of for now, except maybe a device-independent
interactive Graphics package.  Some of us would be delighted to get together
and talk again as soon as you think it might be productive for the future
of Common Lisp.
	--Andy Hanson 415-859-4395  HANSON@SRI-AI
-------

∂28-Feb-82  1158	Scott E. Fahlman <FAHLMAN at CMU-20C> 	T and NIL  
Date: 28 Feb 1982 1500-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: common-lisp at SU-AI
Subject: T and NIL
Message-ID: <820127150012FAHLMAN@CMU-20C>


OK, folks, the time has come.  We have to decide what Common Lisp is
going to do about the things that have traditionally been called T and
NIL before we go on any further.  Up until now, we have deferred this
issue in the hope that people's positions would soften and that their
commitment to Common Lisp would increase over time, but we can't leave
this hanging any longer.  Almost any decision would be better than no
decision.

It is clear that this is an issue about which reasonable people can
differ (and about which unreasonable people can also differ).  I think
that most of us, if we were designing a Lisp totally from scratch, would
use something other than the symbols T and NIL as the markers for truth,
falsity, and list-emptiness.  Most of us have written code in which we
try to bind T as a random local, only to be reminded that this is
illegal.  Most of us have been disgusted at the prospect of taking the
CAR and CDR of the symbol NIL, but the advantages of being able to CDR
off the end of a list, in some situations, are undeniable.

On the other hand, the traditional Maclisp solution works, is used in
lots of code, and feels natural to lots of Lisp programmers.  Should we
let mere aesthetics (and arguable aesthetics at that) push us into
changing such a fundamental feature of the language?  At the least, this
requires doing a query-replace over all code being imported from
Maclisp; at worst, it may break things in subtle ways.

What it comes down to is a question of the relative value that each
group places on compatibility versus the desire to fix all of the things
that Maclisp did wrong.  The Lisp Machine people have opted for
compatibility on this issue, and have lots of code and lots of users
committed to the old style.  The VAX NIL people have opted for change,
with the introduction of special empty-list and truth objects.  They too
have working code embodying their decision, and are loath to change.
The Spice Lisp group has gone with an empty-list object, but uses the
traditional T for truth.

What we need is some solution that is at least minimally acceptable to
all concerned.  It would be a real shame if anyone seceded from the
Common Lisp effort over so silly an issue, especially if all it comes
down to is refusing to do a moby query-replace.  However, in my opinion,
it would be even more of a shame if we left all of this up to the
individual implementors and tried to produce a language manual that
doesn't take a stand one way or the other.  Such a manual is guaranteed
to be confusing, and it is something that Common Lisp would have to live
with for many years, long after the present mixture of people and
projects has become irrelevant.  Either solution, on either of these
issues, is preferable to straddling the fence and having to say that
predicates return some "unspecified value guaranteed to be non-null" or
words to that effect.

On the T issue, the proposals are as follows:

1. Truth is represented by the symbol T, whose only special property is
that its value is permanently bound to T.

2. Truth (and also special input values to certain functions?) is
represented by a special truthity object, not a symbol.  This object is
represented externally as #T, and it presumably evaluates to itself.  In
this proposal, T is just another symbol with no special properties.

2A. Like proposal 2, but the symbol T is permanently bound to #T, so
that existing code with constructs like (RETURN T) doesn't break.

3. Implementors are free to choose between 1 and 2A.  Predicates are
documented as returning something non-null, but it is not specified what
this is.  It is not clear what to do about the T in CASE statements or
as the indicator of a default terminal stream.

I think this case is pretty clear.  As far as I can tell, everyone wants
to go with option 1 except JONL and perhaps some others associated with
VAX NIL, who already have code that uses #T.  Option 2 would allow us to
use T as a normal variable, which would be nice, but would break large
amounts of existing code.  Option 2A would break much less code, but if
T is going to be bound permanently to something, it is hard to see a
good reason not just to bind it to T.  Option 3 is the sort of ugly
compromise I discussed above.

If, as it appears, VAX NIL could be converted to using T with only a day
or so of effort, I think that they should agree to do this.  It would be
best to do this now, before VAX NIL has a large user community.  If there
are deeper issues involved than just having some code and not wanting to
change, a clear explanation of the VAX NIL position would be helpful.

The situation with respect to NIL is more complex.  The proposals are as
follows:

1. Go with the Maclisp solution.  Both "NIL" and "()" read in as the
symbol NIL.  NIL is permanently bound to itself, and is unique among
symbols in that you can take its CAR and CDR, getting NIL in either
case.  In other respects, NIL is a normal symbol: it has a property
list, can be defined as a function (Ugh!) and so on.  SYMBOLP, ATOM, and
NULL of NIL are T; CONSP of NIL is NIL; LISTP of NIL is controversial,
but probably should be T.

2. Go with the solution in the Swiss Cheese edition.  There is a
separate null object, sometimes called "the empty list", that is written
"()".  This object is used by predicates and conditionals to represent
false, and it is also the end-of-list marker.  () evaluates to itself,
and you can take the CAR and CDR of it, getting () in either case.
NULL, ATOM, and LISTP of () are T; CONSP and SYMBOLP of () are ().
Under this proposal, the symbol NIL is a normal symbol in all respects
except that its value is permanently bound to ().

3. Allow implementors to choose either 1 or 2.  For this to work, we
must require that the null object, whatever it is, prints as "()", at
least as an option.  Users must not represent the null object as 'NIL,
though NIL, (), and '() are all OK in contexts that evaluate them.  The
user can count on ATOM and NULL of () to be T, and CONSP of () to be ().
SYMBOLP of () is officially undefined.  LISTP of () should be defined to
be T, so that one can test for CAR-ability and CDR-ability using this.
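
For instance, the test mentioned just above might be used this way (SAFE-CDR
is a made-up name, purely illustrative, and not part of any proposal):

  (DEFUN SAFE-CDR (X)
    (IF (LISTP X) (CDR X) ()))    ; only CDR things known to be CDR-able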

VAX NIL and Spice Lisp have gone with option 2; the Lisp Machine people
have stayed with option 1, and have expressed their disinclination to
convert to option 2.  Most of us in the Spice Lisp group were suspicious
of option 2 at first, but accepted it as a political compromise; now the
majority of our group has come to like this scheme, quite apart from
issues of inertia.  I would point out that option 2 breaks very little
existing code, since you can say things like (RETURN NIL) quite freely.
Code written under this scheme looks almost like code written for
Maclisp -- a big effort to change one's style is not necessary.  It is
necessary, however, to go through old code and convert any instances of
'NIL to NIL, and to locate any occurrences of NIL in contexts that
implicitly quote it.  Option 3 is another one of those ugly compromises
that I believe we should avoid.  My own view is that I would prefer
either option 1 or 2, with whatever one-time inconvenience that would
imply for someone, to the long-term confusion of option 3.
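
To make the 'NIL conversion described above concrete, here is a sketch (the
function is invented for illustration; it is not code from any existing system):

  ;; old Maclisp style -- under option 2, 'NIL is the SYMBOL NIL, not ():
  (DEFUN EMPTY-P (Y) (EQ Y 'NIL))
  ;; converted by changing 'NIL to NIL, which evaluates to () under either option:
  (DEFUN EMPTY-P (Y) (EQ Y NIL))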

I propose that the Lisp Machine people and other proponents of option 1
should carefully consider option 2 and what it would take to convert to
that scheme.  It is not as bad as it looks at first glance.  If you are
willing to convert, that would be the ideal solution.  If not, I can
state that Spice Lisp would be willing to revert to option 1 rather than
cause a major schism; I cannot, of course, speak for the VAX NIL people
or any other groups on this.

Let me repeat that we must decide this issue as soon as possible.  We
have made a lot of progress on the multiple value and sequence issues,
but until we have settled T and NIL, we can hardly claim that the
language specification is stabilizing.  It would be awfully nice to have
this issue more or less settled before we mass-produce the next edition
of the manual.

-- Scott
   --------

∂28-Feb-82  1342	Scott E. Fahlman <FAHLMAN at CMU-20C> 	T and NIL addendum   
Date: 28 Feb 1982 1640-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: common-lisp at SU-AI
Subject: T and NIL addendum
Message-ID: <820127164026FAHLMAN@CMU-20C>

It occurs to me that my earlier note discusses the T and NIL issue
primarily in terms of the positions taken by the Lisp Machine and
VAX NIL communities.  The reason for this, of course, is that these two
groups have taken strong and incompatible positions that somehow have to
be resolved if we are to keep them both in the Common Lisp camp.  I did
not mean to imply that we are uninterested in the views and problems of
other implementations, existing or planned, or of random kibitzers for
that matter.

-- Scott
   --------

∂28-Feb-82  1524	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
Date: 28 February 1982 18:23-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  T and NIL.
To: COMMON-LISP at SU-AI

#+FLAME-MODE '|
Existing bodies of working code are absolutely no consideration for
the NIL implementors in this particular issue. I guess this might be
obvious, but why should we be so callously radical? It is simply
that we have reason to believe that the pieces of existing code which depend
on the present relationship between list and symbol semantics
and predicate semantics, and which will not run as-is in NIL, are
exceedingly easy to find and fix. We also believe that the
existing lisp 1.5 semantics are inadvertently overloaded,
implying that GET, PUTPROP, SYMEVAL, and other symbol primitives
may be used on the return value of predicates and the empty list, and
needlessly implying that evaluation-semantics need not reflect
datatype-semantics. |


In bringing up Macsyma and other originally pdp10-maclisp code in NIL,
I have found it much easier to deal with the predicate-issue than with
the fact that CAR and CDR do error-checking. Well, the CAR/CDR problem
had already been "smoked" out of Macsyma by the Lispmachine. There
was no need to do any QUERY-REPLACE, and no subtle bugs.
(Non-trivial amounts of LISPMACHINE code were also snarfed for use in NIL,
 although Copyright issues [NIL wants to be public domain] may force
 a rewrite of these parts. The only Lispmachine code which depended on
 () being a symbol explicitly said so in a comment, since probably the
 author felt "funny" about it in the first place.)

There was only one line of Macsyma which legally depended on the return values
of predicates other than their TRUTHITY or FALSITY. There were a few more
lines of Macsyma which depended illegally on the return value of
predicates. These were situations where GET, PUTPROP, and REMPROP
were being used on the return value of predicate-like functions,
e.g. using REMPROP on the return value of the "CDR-ASSQ" idiom, using
GET on the return value of GET. In good-old "bare" pdp-10 maclisp
with only one program running in it, this is not a problem, but
=> On the lispmachine, which has a large environment and many usages of
   property lists, it can be very dangerous for programs to unwittingly
   share the property lists of global symbols T and NIL. <=

The other part of the picture is that we know we can write code
which doesn't have things like #T in it, and which will run in
COMMON-LISP regardless of what COMMON-LISP does.

-gjc

∂28-Feb-82  1700	Kim.fateman at Berkeley 	smoking things out of macsyma 
Date: 28 Feb 1982 16:35:58-PST
From: Kim.fateman at Berkeley
To: COMMON-LISP@SU-AI, GJC@MIT-MC
Subject: smoking things out of macsyma


I really doubt that all problems are simple
to smoke out;  in fact, I suspect that there are still places
where the Lisp Machine version of Macsyma fails for mysterious
reasons.  These may be totally unrelated to T vs #T or NIL vs (),
but I do not see how GJC can be so confident.

For example, when we brought Macsyma up on the VAX, (after it
had allegedly been brought up on a CADR) we found
places where property lists were found by computing CAR of atoms;
we found a number of cases of (not)working-by-accident functions whose 
non-functionality was noticed only when run on the VAX with a modest
amount of additional error checking. (e.g. programs which should
have bombed out just chugged along on the pdp-10).

GJC claims there is (was?) only one line of Macsyma which legally
depends on other-than truthity of a predicate. I believe this is
false, but in any case, a  proof of his claim would require rather 
extensive analysis. Whichever way this decision goes (about NIL or ()),
I would be leery of making too much of GJC's note for supporting evidence.

∂28-Feb-82  1803	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Re:  T and NIL. 
Date: 28 Feb 1982 2105-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: GJC at MIT-MC, COMMON-LISP at SU-AI
Subject: Re:  T and NIL.
Message-ID: <820127210537FAHLMAN@CMU-20C>
Regarding: Message from George J. Carrette <GJC at MIT-MC>
              of 28-Feb-82 1823-EST

I am not sure that I completely understand all of your (GJC's) recent
message.  Some of the phrases you use ("the predicate-issue", for
example, and some uses of "illegal") might be read in several ways.  I
want to be very sure that I understand your views.  Is the following a
reasonable summary, or am I misreading you:

1. The VAX NIL group's preference for separate truth and
empty-list/false objects is not primarily due to your investment in
existing code, but rather because you are concerned about the unwisdom
of overloading the symbols T and NIL.

2. On the basis of your experience in porting large programs from
Maclisp to NIL, you report that very few things have to be changed and
that it is very easy to find them all.

3. If, nevertheless, the Common Lisp community decides to go with the
traditional Maclisp use of T and NIL as symbols, you will be able to
live with that decision.

-- Scott
   --------

∂28-Feb-82  2102	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
Date: 1 March 1982 00:02-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  T and NIL.
To: FAHLMAN at CMU-20C
cc: COMMON-LISP at SU-AI

1. Right, not much VAX-NIL code written in LISP depends on this T and NIL issue.
2. Right, no query-replace was needed, no subtle bugs lurking due to this.
   I did make a readtable for Macsyma so that NIL read in as ().
3. Here I meant that the "T and NIL" thing is not an important
   TRANSPORTABILITY issue. Code which does not depend on the overloading
   will indeed run. But building the overloading into NIL at this point
   will cost something. I'm not sure it is worth it.


∂28-Feb-82  2333	George J. Carrette <GJC at MIT-MC> 	Take the hint.
Date: 1 March 1982 02:33-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: Take the hint.
To: Kim.fateman at UCB-C70
cc: COMMON-LISP at SU-AI

I really wish you wouldn't use the COMMON-LISP mailing
list for sales-pitches about how much better your Franz
implementation of Macsyma is than the Lispmachine implementation,
especially when it comes down to such blatant mud-slinging
as saying that you "suspect that there are still places
where the Lisp Machine version of Macsyma fails for mysterious
reasons."

Just because GJC mentions the magic word Macsyma doesn't mean you
have to take it as a cue to flame. What you said had nothing
to do with the concerns of COMMON-LISP. Who do you think cares about
what you "suspect" about the Lispm?


∂01-Mar-82  1356	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: T and NIL   
Date:  1 Mar 1982 1211-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: T and NIL
To: FAHLMAN at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 28-Feb-82 1500-EST

If you are calling for a vote, here are mine.

On truth:  1, 2A, 2, 3.  As long as you are going to say that everything
non-NIL (non-()?) is true, it seems completely pointless to add a new
data-type to represent truth.

On emptiness:  1, 2, 3.

I feel very strongly about the undesirability of allowing differences
among implementations.  I feel less strongly about the undesirability
of changing T and NIL to #T and ().

Mostly, I simply don't understand the reason for changing NIL and T.  I
thought the goal of CL was to make changes only when there is some
reason for them.  The only reason I can figure out is that people find
it inelegant that T and NIL look like symbols but don't quite work like
normal symbols.  However it seems at least as inelegant to add a new data
type for each of them.  Particularly when the most likely proposals
leave NIL and T so they can't be rebound, thus not really solving the
problem of having NIL and T be odd.

By the way, I have another issue that is going to sound trivial at
first, but which may not end up to be:  Does anyone care about whether
Lisp code can be discussed verbally?  How are you going to read #T and
() aloud (e.g. in class, or when helping a user over the phone)?  I
claim the best pronunciation of () is probably the funny two-toned bleep
used by the Star Trek communicators, but I am unsure how to get it in
class.  In fact, if you end up with 2A and 2, which seem the most likely
"compromises", people are going to end up reading #T and () as "t" and
"nil".  That is fine as long as no one actually uses T and NIL as if
they were normal atoms.  But if they do, imagine talking (or thinking)
about a program that had a list (NIL () () NIL).

By the way, if you do decide to use proposal 1 for NIL, please consider
disallowing NIL as a function.  It seems that it is going to be worse
for us to allow NIL as a function than to implement property lists or
other attributes.
-------

∂01-Mar-82  2031	Richard M. Stallman <RMS at MIT-AI> 	Pronouncing ()    
Date: 1 March 1982 23:30-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Pronouncing ()
To: common-lisp at SU-AI

If () becomes different from NIL, there will not be any particular
reason to use the symbol NIL.  Old code will still have NILs that are
evaluated, but in those places, NIL will be equivalent to ().

So there will rarely be a need to distinguish between the symbol NIL
and ().  It will be no more frequent than having to distinguish
between LIST-OF-A and (A) or between TWO and 2.  When the problem does
come up, it will not be insuperable, just a nuisance, like the other
two problems.

Alternatively, we might pronounce () as "empty" or "false".

∂01-Mar-82  2124	Richard M. Stallman <RMS at MIT-AI> 	() and T.    
Date: 1 March 1982 23:43-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: () and T.
To: common-lisp at SU-AI

I believe that () should be distinguished from NIL because
it is good if every data type is either all true or all false.
I don't like having one symbol be false and others true.

Another good result from distinguishing between () and NIL is
that the empty list can be LISTP.

For these reasons, I think that the Lisp machine should convert
to option 2 for NIL.

The situation for T is different.  Neither of those advantages
has a parallel for the case of T and #T.  It really doesn't matter
what non-() object is returned by predicates that want only to return
non-falsity, so the symbol T is as good as any.  There is no reason
to have #T as distinct from T.  However, option 3 is not really ugly.
Since one non-() value is as good as another, there is no great need
to specify which value the implementation must use.  I prefer option 1,
but I think option 3 is nearly as good.

Meanwhile, let's have the predicates SYMBOLP, NUMBERP, STRINGP and CONSP
return their arguments, to indicate truth.  This makes possible the
construction
  (RANDOM-FUNCTION (OR (SYMBOLP expression) default))
where default might eval to a default symbol or might be a call to ERROR.
To do this now, we must write
  (RANDOM-FUNCTION (LET ((TEM expression))
		     (IF (SYMBOLP TEM) TEM
		       default)))
LISTP should probably return its argument when it is a non-() list.
(LISTP ()) should return some non-() list, also.
ATOM should return its argument if that is not ().
(ATOM ()) should return T.  Then ATOM's value is always an atom.

The general principle is: if a predicate FOO-P is true if given
falsehood as an argument, FOO-P should always return an object
of which FOO-P is true.
If, on the other hand, FOO-P is false when given falsehood as an
argument, then FOO-P should always return its argument to
indicate truth.
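
As a sketch of what these principles would look like in code (MY-CONSP and
MY-ATOM are made-up names, used only to illustrate the idea):

  (DEFUN MY-CONSP (X)            ; false of (), so return the argument for truth
    (IF (CONSP X) X '()))
  (DEFUN MY-ATOM (X)             ; true of (), so return something MY-ATOM is true of
    (COND ((NULL X) 'T)          ; (MY-ATOM '()) => T, itself an atom
          ((ATOM X) X)           ; any other atom: return the argument
          (T '())))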

These two principles can be applied whether or not () and NIL
are the same.  If applied, they minimize the issue about T and #T.

∂02-Mar-82  1233	Jon L White <JONL at MIT-MC> 	NIL versus (), and more about predicates.    
Date: 2 March 1982 14:28-EST
From: Jon L White <JONL at MIT-MC>
Subject: NIL versus (), and more about predicates.
To: Fahlman at CMU-10A
cc: common-lisp at SU-AI


NIL and ()

  RMS just raised several important points about why it would
  be worth the effort to distinguish the empty list from the symbol 
  NIL.  Some years ago when the NIL effort addressed this question,
  we felt that despite **potential** losing cases, there would be
  almost no effort involved in getting existing MacLISP code to
  work merely by binding NIL at top level to ().   GJC's comments
  (flaming aside) seem to indicate that the effect of this radical
  change on existing code is indeed infinitesimal;  the major problem 
  is convincing the unconvinced hacker of this fact.  I've informally
  polled a number of LISPMachine users at MIT over the last year on 
  this issue, and the majority response is that the NIL/() thing is 
  unimportant, or at most an annoyance -- it pales entirely when compared 
  to the advantages of a **stable** system (hmmm, LISPM still changing!).

Return value of Predicates:

  However, we didn't feel that it would be so easy to get around
  the fact that the function NULL is routinely used both to test for the 
  nullist, and for checking predicate values.  That seems to imply that 
  the nullist will still have to do for boolean falsity in the LISP
  world.

  Boolean truthity could be any non-null object, and #T is merely a 
  way of printing it.  As long as #T reads in as the canonical truth 
  value, then there is no problem with existing NIL code, for I don't 
  believe anyone (except in a couple of malice aforethought cases) 
  explicitly tries to distinguish #T from other non-null objects.  
  Certainly, we all could live with a decision to have #T read in as T.
  But note that if #T isn't unique, then there is the old problem, as 
  with NIL and () in MacLISP now, that two formats are acceptable for 
  read-in, but only one can be canonically chosen for printout;  it would 
  thus be *possible* for a program to get confused if it were being 
  transported from an environment where the distinction wasn't made into 
  one where it was made.

  Most "random" predicates in PDP10 MacLISP (i.e., predicates that
  don't really return any useful value other than non-false) return the 
  value of the atom *:TRUTH, rather than a quoted constant, so that it is 
  possible to emulate NIL merely by setq'ing this atom.

  At the Common-LISP meeting last November, my only strong position
  was that it would be unwise *at this point in time* to commit "random" 
  predicates to return a specific non-false value (such as the symbol T).  
  The reason is simply that such a decision effectively closes out the 
  possibility of ever getting a truthity different from the symbol T -- not 
  that there is existing code depending on #T.  Had the original designers 
  of LISP been a little more forward-looking (and hindsight is always better 
  than foresight!) they would have provided one predicate to test for nullist,
  and another for "false";  even if one particular datum implements both,
  it would encourage more "structure" to programs.   I certainly don't feel 
  that the nullist/"false" merger can be so easily ignored as the nullist/NIL 
  merger.

  The T case for CASE/SELECT is unique anyway -- the T there is unlike
  the T in cond clauses, since it is not evaluated.  This problem
  would come up regardless of what the truthity value is (i.e., the
  problem of the symbols T and OTHERWISE being special-cased by CASE).

∂02-Mar-82  1322	Jon L White <JONL at MIT-MC> 	NOT and NULL: addendum to previous note 
Date: 2 March 1982 16:17-EST
From: Jon L White <JONL at MIT-MC>
Subject: NOT and NULL: addendum to previous note
To: Common-Lisp at SU-AI

The merging of the functionality of NOT and NULL makes it
mechanically impossible to separate out the usages of null
as the nullist from those usages as "false";  this merger,
of course, was almost demanded by the lack of a "false" 
distinct from null.   In fact, both names, NOT and NULL,
have probably been around since antiquity, but there have never been
two separate functionalities.

∂02-Mar-82  1322	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
Date: 2 March 1982 15:59-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  T and NIL.
To: FAHLMAN at CMU-20C
cc: COMMON-LISP at SU-AI

To give some perspective, the things to change in the existing NIL code
to support the T & NIL symbol overloading from lisp 1.5 are:
[1] Change the type code of the () object to be of type SYMBOL.
[2] Change every primitive which acts on symbols to have a
    special case check like (DEFUN PLIST (X) (%PLIST (SYMBOL-FILTER X))).
    where: (DEFUN SYMBOL-FILTER (X)
	     (IF (SYMBOLP X)
		 (IF (NULL X) *NIL-SURROGATE* X)
                 (SYMBOL-FILTER (WRONG-TYPE-ARG-ERROR "Not a symbol" X))))
    and %PLIST is the usual open-compiled structure reference.
[3] Make the usual changes in the evaluator, special case for T and NIL.
[4] Make #T read in as T.
[5] Recompile and reassemble the world.

So you can see that it isn't all that much work, won't slow things
down too much (mainly the evaluator), and won't make things any bigger.
Larger changes, such as changing the calling sequence of most things
in the virtual machine, have been made in NIL in the recent past.

Remember though, this will be in lisp, and not hidden away in microcode,
so users will be able to see the funny stuff going on. It won't be
as if the semantics were built-in to the hardware or engraved on
stones brought down from a mountain top.

Obviously the other more localized way of changing this is to redefine
CAR and CDR to work on the symbol NIL. Totally out of the question
in a non-microcoded implementation.


∂02-Mar-82  1406	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	I think I am missing something 
Date:  2 Mar 1982 1658-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: I think I am missing something
To: fahlman at CMU-20C
cc: common-lisp at SU-AI

In the last couple of days, I have been seeing lots of potentially
useful discussion on how difficult it is to change various programs or
dialects to fit various conventions.  However I was also interested to
see why one would want to change from the hallowed definitions of T and
NIL in the first place. One of the messages yesterday had what seemed at
first to be a good justification, and at least one person has made
comments in passing today that seem to indicate they were thinking the
same thing.  But I have a problem with it. The justification, as I
understand it, is that currently NIL is overloaded, and thus leads to
ambiguities.  The most common one is when you say (GET 'FOO), you get
back NIL, and you don't know whether this means there is no FOO
property, or there is one and its value is NIL.  I agree that this is
annoying.  However as I understand the proposal, () is going to be used
for both the empty list and Boolean false.  If so, I don't understand
how this resolves the ambiguity.  As far as I can see, the new symbol
NIL is going to be useless, except that it will help old code such as
(RETURN NIL) to work. Basically everybody is now going to use () where
they used to use NIL. As far as I can see, the same ambiguity is going
to be there.  Under the new system, FOO is just as likely to have a
value of () as it was to have a value of NIL under the old system, so I
still can't tell what is going on if (GET 'FOO) returns ().  Even if you
separate the two functions, and have a () and a #FALSE (the canonical
object indicating falsity), something that would break *very* large
amounts of code, I would think there would be a reasonable number of
applications where properties would have Boolean values.  So (GET 'FOO)
would still sometimes return #FALSE.

∂03-Mar-82  1158	Eric Benson <BENSON at UTAH-20> 	The truth value returned by predicates    
Date:  3 Mar 1982 1228-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: The truth value returned by predicates
To: Common-Lisp at SU-AI

It seems to me that, except for those predicates like MEMBER which return a
specific value, the implementation should be allowed to return any handy
non-false value.  This is inconsequential for microcoded implementations,
but could save a great deal in "stock hardware" versions.  Whether or not
more predicates should return useful values, as Stallman suggests, is a
different matter.  My feeling is "why not?" since programmers are free to
use this feature or not, as they see fit.  I think that it might lead to
obscure code, but I wouldn't force my opinion on others if it doesn't
infringe on me.  For the same reason, I think either option 1 or 2 for
NIL/() is reasonable.  In fact, most opinions on this matter seem to be "I
prefer X but I can live with Y."  Although I think () is cleaner, I'm
inclined to agree with Hedrick that it's not that much cleaner.  It truly
pains me to go for the conservative option, but I just don't think there's
enough to gain by changing.
-------

∂03-Mar-82  1753	Richard M. Stallman <RMS at MIT-AI>
Date: 3 March 1982 20:33-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

Hedrick is correct in saying that distinguishing () from NIL
does not make it possible to distinguish between "no property"
and a property whose value is false, with GET.  However, I think
his message seemed to imply a significance for this fact which it does
not have.

As long as we want GET to return the value of the property, unaltered
(as opposed to returning a list containing the object, for example),
and as long as we want any object at all to be allowed as a value
of a property, then it is impossible to find anything that GET
can return in order to indicate unambiguously that there is no property.

I don't think this is relevant to the question of NIL and ().
The reasons why I think it would be good to distinguish the two
have nothing to do with GET.

It is convenient that the empty list and false are the same.  I do not
think, even aside from compatibility, that these should have been
distinguished.  The reasons that apply to NIL vs () have no
analog for the empty list vs false.

∂04-Mar-82  1846	Earl A. Killian <EAK at MIT-MC> 	T and NIL   
Date: 4 March 1982 19:01-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  T and NIL
To: FAHLMAN at CMU-20C
cc: common-lisp at SU-AI

If you're taking a poll, I prefer 2 and then 3 on the NIL issue.
The T issue I can't get too excited about.  Whatever is decided
for T, perhaps the implementation that NIL uses should be
encouraged, if it allows experimentation with a separate data type
truth value simply by setting symbols T and *:TRUTH.

∂04-Mar-82  1846	Earl A. Killian <EAK at MIT-MC> 	Fahlman's new new sequence proposal, and an issue of policy   
Date: 4 March 1982 19:08-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Fahlman's new new sequence proposal, and an issue of policy
To: MOON at SCRC-TENEX
cc: common-lisp at SU-AI

    Date: Monday, 22 February 1982  02:50-EST
    From: MOON at SCRC-TENEX

    mumble-IF-NOT is equally as useful as mumble-IF, if you look at how they
    are used.  This is because the predicate argument is rarely a lambda, but
    is typically some pre-defined function, and most predicates do not come in
    complementary versions.  (Myself, I invariably write such things with
    LOOP, so I don't have a personal axe to grind.)

Another possibility is to define a function composition operator.
Then you'd do
	(mumble-IF ... (COMPOSE #'NOT #'SYMBOLP) ...)
instead of
	(mumble-IF ... (LAMBDA (X) (NOT (SYMBOLP X))) ...)
This is nicer because it avoids introducing the extra name X.
(Maybe the #'s wouldn't be needed?)
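
One possible sketch of such an operator is a two-argument COMPOSE written as
an ordinary function that returns a closure (only an illustration of the idea,
not a proposal for how it should actually be provided):

	(DEFUN COMPOSE (F G)
	  #'(LAMBDA (X) (FUNCALL F (FUNCALL G X))))

	;; e.g. (FUNCALL (COMPOSE #'NOT #'SYMBOLP) 'FOO)  =>  false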

∂05-Mar-82  0101	Richard M. Stallman <RMS at MIT-AI> 	COMPOSE 
Date: 5 March 1982 02:27-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: COMPOSE
To: common-lisp at SU-AI

COMPOSE can be defined as a lambda macro, I think.

∂05-Mar-82  0902	Jon L White <JONL at MIT-MC> 	What are you missing?  and "patching"  ATOM and LISTP  
Date: 5 March 1982 12:01-EST
From: Jon L White <JONL at MIT-MC>
Subject: What are you missing?  and "patching"  ATOM and LISTP
To: HEDRICK at RUTGERS
cc: common-lisp at SU-AI

  Date:  2 Mar 1982 1658-EST
  From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
  Subject: I think I am missing something
  . . .  
The reasons for distinguishing NIL from () aren't related to the GET 
problem mentioned in your note;  RMS mentioned this too in his note 
of Mar 3.   In fact, since Common-Lisp will have multiple-values, the 
only sensible solution for GET (and others like it, such as the HASH-GET 
I have in a hashing package) is to return two values, the second of which 
tells whether or not the flag/attribute was found.
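
For instance, assuming a GET that returns such a second "found" value, and
some multiple-value binding form (both of these are assumptions here, not
settled Common Lisp), a caller could write:

  (MULTIPLE-VALUE-BIND (VALUE FOUND-P) (GET 'FOO 'COLOR)
    (IF FOUND-P
        VALUE
        'NO-SUCH-PROPERTY))    ; the property name and the default are made up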

A more important aspect is the potential uniformity of functions which
act on lists -- there needn't be a split of code, one way to handle
non-null lists, and the other way to handle null (e.g. CAR and CDR).
In fact, I think RMS's statement of the problem on Mar 1 is quite succinct, 
and bears repeating here:
    Date: 1 March 1982 23:43-EST
    From: Richard M. Stallman <RMS at MIT-AI>
    Subject: () and T.
    I believe that () should be distinguished from NIL because
    it is good if every data type is either all true or all false.
    I don't like having one symbol be false and others true.
    Another good result from distinguishing between () and NIL is
    that the empty list can be LISTP. . . . 

However, even though it would be reasonable for CONSP to return its argument
when "true", I don't believe there is advantage to having predicates like 
ATOM and LISTP to try to return some "buggered" value for null.  There has to 
be some kind of discontinuity for any predicate which attempts to return its 
argument when "true", but which is "true" for the "false" datum;  that 
discontinuity is as bad as CAR and CDR being applicable to one special symbol 
(namely NIL).  The limiting case in this line of reasoning is the predicate 
NOT -- how could it return its argument?  Patching ATOM and LISTP for the 
argument of "false" makes as much sense to me as patching NOT.


∂05-Mar-82  0910	Jon L White <JONL at MIT-MC> 	How useful will a liberated T and NIL be?    
Date: 5 March 1982 12:09-EST
From: Jon L White <JONL at MIT-MC>
Subject: How useful will a liberated T and NIL be?
To: Hedrick at RUTGERS
cc: common-lisp at SU-AI


The following point is somewhat subsidiary to your main point in the
note of Mar 2;  but it is an issue worth facing now, and one which I 
don't believe has hit the mails yet (although it has had some verbal
discussion here):
    Date:  2 Mar 1982 1658-EST
    From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
    Subject: I think I am missing something
    . . .   As far as I can see, the new symbol
    NIL is going to be useless, except that it will help old code such as
    (RETURN NIL) to work. 
As to the prospect that the symbol NIL (and the symbol T if Fahlman's
option 2 or 2A on "truthity" is taken) will become useless due to being
globally bound to null (and to #T for T), Well: Such binding is relevant 
only to old code.   New code is free to bind those symbols at will, so long 
as the new code doesn't try to call old code with **dynamic** rebindings of 
NIL and/or T.  I believe we will have local declarations in Common-Lisp, and 
a "correct" evaluator (vis-a-vis local variables), so code like
  (DEFUN FOO (PRED F T)
    (DECLARE (LOCAL F T))
    (COND (F (NULL PRED))
	  (T PRED)
	  (#T () )))
will be totally isolated from the effects of the global binding of T.


∂05-Mar-82  1129	MASINTER at PARC-MAXC 	NIL and T   
Date:  5 MAR 1982 1129-PST
From: MASINTER at PARC-MAXC
Subject: NIL and T
To:   Common-Lisp at SU-AI

Divergences in Common-Lisp from common practice in the major dialects
of Lisp in use today should be made for good reason.

The stronger the divergence, the better the reasons need to be.
The strength of the divergence can be measured by the amount of
impact a change can potentially have on an old program: 
 little or no impact (e.g., adding new functions)
 mechanically convertible (e.g., changing order of arguments)
 mechanically detectable (e.g., removing functions in favor of others)
 not mechanically detectable (e.g., changing the type of the empty list).


Good reasons can come under several categories: uniformity, 
ease of efficient implementation, usefulness of the feature,
and aesthetics.

Aesthetic arguments can be general ("I like it") or specific
("the following program is 'cleaner').


I think that changing NIL and T requires very strong reasons.
Most of the arguments for the change have been in terms of
general aesthetics. I do not believe there are strong arguments
for this divergence: the number of situations in which programs
become clearer is vanishingly small, and not nearly enough to
justify this source of confusion to most anyone who has used
most any dialect of Lisp in the last ten years.

Larry

∂05-Mar-82  1308	Kim.fateman at Berkeley 	aesthetics, NIL and T    
Date: 5 Mar 1982 12:52:43-PST
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: aesthetics, NIL and T

Although the discussion would not lead one to believe this, I suspect
that at least some of the motivation is based on implementation
strategy.  That is, if NIL is an atom, and can have a property list,
then it cannot (perhaps) be stored in "location 0" of read-only memory
(or whatever hack was used to make (cdr nil) = nil).
This kind of consideration (though maybe not exactly this), would eventually
come to the surface, and unless people face up to questions like how
much does it really cost in implementation inconvenience and run-time
efficiency, we are whistling in the dark.  I reject the argument that
has been advanced that it costs nothing in some dialects, unless other
strategies for the same machine are compared.  In
some sense, you could say that "bignum arithmetic" costs nothing in
certain lisps  "because it is done all the time anyway"! Ditto for
some kinds of debugging info.

∂05-Mar-82  2045	George J. Carrette <GJC at MIT-MC> 	I won't die if (SYMBOLP (NOT 'FOO)) => T, but really now...
Date: 5 March 1982 23:45-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: I won't die if (SYMBOLP (NOT 'FOO)) => T, but really now...
To: MASINTER at PARC-MAXC
cc: COMMON-LISP at SU-AI

I have to admit that "divergences in Common-Lisp from common practice
in the major dialects in use today" doesn't concern me too much.  
Aren't there great differences among the lisps in fundamental areas,
such as function calling? [E.G. The Interlisp feature such that user-defined
functions do not take well-defined numbers of arguments.]

The kind of thing that concerns me is the sapping away of productivity
caused by continuous changes in a given language, and having to
continuously deal with multiple changing languages when supporting
large programming systems in those lisp dialects. I know that given a
reasonable lisp language I can write the macrology that will make it
look pretty much the way I want it to look, and stability in the
language aids in this, because then I wouldn't have to spend a lot of
effort continuously maintaining the macrolibraries.

The aesthetic considerations are then very important. For example, the more
operator overloading which is built into a language, and the
more things in a language which have no logical reason to be in
it other than "history," the greater the difficulty of doing the
customization of the language. Considerations of sparseness, uniformity, 
and orthogonality are aesthetics, and are known to be important in
language design.

Also, what *is* the source of confusion for a person who has
programmed in lisp for ten years? Have you seen the change in
programming style which happened in MIT Lisps in the last three or four
years, let alone ten? Have you observed the difference between the
lisp appearing in Patrick Winston's book, versus what is
COMMON-PRACTICE in the LispMachine world? Have you seen what Gerry
Sussman has been teaching to six-hundred MIT undergraduates a year?
How could one possibly be worried then about "operator retraining
considerations" over such a trivial item as the empty list not being a
symbol? My gosh, haven't you heard that COMMON-LISP is going for
lexical-scoping in a big way? What about "operator retraining" for that?

-gjc

∂05-Mar-82  2312	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Lexical Scoping 
Date: 6 Mar 1982 0211-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: GJC at MIT-MC, common-lisp at SU-AI
Subject: Lexical Scoping
Message-ID: <820205021146FAHLMAN@CMU-20C>
Regarding: Message from George J. Carrette <GJC at MIT-MC>
              of 5-Mar-82 2345-EST

Before you all panic over GJC's comment and go running off on yet
another tangent, why don't we wait and see what Guy proposes on the
lexical-scoping issue.  I suspect that it won't be super-radical.

The debate among two or three people is interesting, but I would really
like to hear from anyone else out there who has a strong opinion on
the T/NIL issue.  Are there any Lisp Machine people besides RMS who
care about this?  Even if you are standing pat on the position you
stated before, it would be useful to get a confirmation of that.

-- Scott
   --------

∂06-Mar-82  1218	Alan Bawden <ALAN at MIT-MC> 	What I still think about T and NIL 
Date: 6 March 1982 15:17-EST
From: Alan Bawden <ALAN at MIT-MC>
Subject: What I still think about T and NIL
To: common-lisp at SU-AI
cc: FAHLMAN at CMU-20C

    Date: 6 Mar 1982 0211-EST
    From: Scott E. Fahlman <FAHLMAN at CMU-20C>

    The debate among two or three people is interesting, but I would really
    like to hear from anyone else out there who has a strong opinion on
    the T/NIL issue.  Are there any Lisp Machine people besides RMS who
    care about this?  Even if you are standing pat on the position you
    stated before, it would be useful to get a confirmation of that.

I must have started to send a message about this T and NIL issue at least 5
times now, but each time I stop myself because I cannot imagine that it will
change anybody's mind about anything.  But since you ask, I still feel that 

∂06-Mar-82  1251	Alan Bawden <ALAN at MIT-MC> 	What I still think about T and NIL 
Date: 6 March 1982 15:50-EST
From: Alan Bawden <ALAN at MIT-MC>
Subject: What I still think about T and NIL
To: common-lisp at SU-AI
cc: FAHLMAN at CMU-20C

Sorry about the fragment I just sent to you all.  I tried to stop it, but
COMSAT is quicker than I am.

I must have started to send a message about this T/NIL issue at least 5 times
now, but each time I stop myself because I cannot imagine that it will change
anybody's mind about anything.  (You might not have even gotten this one if I
hadn't accidentally sent a piece of it.)  But since you ask, I still feel that
the idea of changing the usage of T and NIL is a total waste of everybody's
time.  The current discussion seems unlikely to resolve anything and finding it
in my mailbox every day is just rubbing me in the wrong direction.  I don't see
where the morality and cleanliness of () even comes close to justifying its
incompatibility, and I seem to remember that Common Lisp was supposed to be
more about compatibility than morality.

∂06-Mar-82  1326	Howard I. Cannon <HIC at MIT-MC> 	T/NIL 
Date: 6 March 1982 16:26-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  T/NIL
To: FAHLMAN at CMU-20C
cc: GJC at MIT-MC, common-lisp at SU-AI

I am still violently against changing it from what we have now.

I don't remember the numbers, but NIL should be a symbol, and false,
and the empty list, and CAR/CDRable, and T should be canonical truth.

∂06-Mar-82  1351	Eric Benson <BENSON at UTAH-20> 	CAR of NIL  
Date:  6 Mar 1982 1446-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: CAR of NIL
To: Common-Lisp at SU-AI

I can understand why one would want to be able to take the CDR of NIL, but
why in the world should CAR of NIL be defined?  That seems like it's just
making sloppy programming safe.  Why is NIL more sensible for the CAR of NIL
than any other random value?  Please excuse the tirade, I was just getting used
to the idea of the CDR of NIL being NIL.
-------

∂06-Mar-82  1429	KIM.jkf@Berkeley (John Foderaro) 	t and nil  
Date: 6-Mar-82 14:15:22-PST (Sat)
From: KIM.jkf@Berkeley (John Foderaro)
Subject: t and nil
Via: KIM.BerkNet (V3.73 [1/5/82]); 6-Mar-82 14:15:22-PST (Sat)
To: fahlman@cmu-20c
Cc: common-lisp@su-ai

  I see no reason to change the current meanings of t and nil.  I consider
the fact that nil is the empty list and represents false to be one of the 
major features of the language and definitely not a bug.  I've read
over the many letters on the subject and I still don't understand
what the benefit of () and #t is.  I would like to see lots and lots
of concrete examples where using () and #t improve the code.  If the
proponents of the change can't provide such examples, then they are 
attempting to solve a non-problem.
  Aesthetically, I much prefer 'nil' to () [just as I prefer (a b c)
to (a . (b . (c . nil))) ]

  I hope that the  common-lisp committee goes back to the task
of describing a common subset of existing lisp dialects for the
purpose of improving lisp software portability.  The lisp language
works, there is lots of software to prove it.  Please
leave lisp alone.


∂06-Mar-82  1911	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: CAR of NIL  
Date:  6 Mar 1982 2208-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: CAR of NIL
To: BENSON at UTAH-20
cc: Common-Lisp at SU-AI
In-Reply-To: Your message of 6-Mar-82 1646-EST

The usefulness of CAR and CDR of NIL may depend upon the dialect.  In
Interlisp and R/UCI Lisp it allows one to pretend that data structures,
including function calls, have components that they do not in fact have.
E.g. at least in R/UCI Lisp, optional arguments fall out automatically
from this convention.  Suppose FOO has two arguments, but you call (FOO
A).  When the interpreter or compiler attempts to find the second
argument, they get NIL, because (CADDR '(FOO A)) is NIL under the (CAR
NIL) = NIL, (CDR NIL) = NIL rule.  This has the effect of making NIL an
automatic default value.  In practice this works most of the time, and
avoids a fair amount of hair in implementing real default values and
optional args.  Similar things can be done with user data structures.
It seems fairly clear to me that if (CDR NIL) is to be NIL, (CAR NIL)
must be also, since typically what you really want is that (CADR NIL),
(CADDR NIL), etc., should be NIL. Whether all of this is as important in
MAClisp is less clear.  MAClisp allows explicit declaration of optional
arguments, and if they are not declared, then presumably we want to
treat missing args as errors.  Similarly, Common Lisp will have much
more flexible record structures than the old R/UCI Lisp did (though
Interlisp of course has similar features). It seems to me that if people
write programs using the modern structuring concepts available in Common
Lisp, CAR and CDR NIL will again not be necessary for user data
structures.  Thus as an attempt to find errors as soon as possible, one
might prefer it to be considered an error. It is my impression that CAR
and CDR NIL are being suggested to help compatibility with existing
implementations, and that *VERY* large amounts of code depend upon
it.  One would probably not do it if designing from scratch.
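
Spelled out, the case described above reduces as follows under the
(CAR NIL) = NIL, (CDR NIL) = NIL rule:

    (CADDR '(FOO A))  ==  (CAR (CDR (CDR '(FOO A))))
                      ==  (CAR (CDR '(A)))
                      ==  (CAR NIL)
                      ==  NIL

so the "missing" second argument simply reads as NIL.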

-------

∂06-Mar-82  2306	JMC  
Count me against (car nil) and (cdr nil).

∂06-Mar-82  2314	Eric Benson <BENSON at UTAH-20> 	Re: CAR of NIL   
Date:  7 Mar 1982 0010-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: CAR of NIL
To: HEDRICK at RUTGERS
cc: Common-Lisp at SU-AI
In-Reply-To: Your message of 6-Mar-82 2008-MST

Thanks.  I figured there was a semi-sensible-if-archaic explanation for it.
If the thing has to have a CAR as well as a CDR, I guess I'll change my
vote from NIL to ().  From an implementor's standpoint, it's not too tough
for the CDR of NIL to be NIL; just put the value cell of an ID record in
the same position as the CDR cell in a pair record.  It's rather slim
grounds for choosing the layout, but these things tend to be rather
arbitrary anyway.  If it has to have 2 fields dedicated to NIL, things get
hairier.  One could put the property list cell of an ID in the CAR
position, but then of course NIL's real property list has to go somewhere
else, and we need special code in property list accessing for NIL.  If it
has to be special-cased, there's probably a more intelligent way to do it.
I'd rather have a separate data type that looks like a pair, even if it
means losing one more precious tag.
-------

∂07-Mar-82  0923	Daniel L. Weinreb <dlw at MIT-AI> 	Re: CAR of NIL 
Date: 7 March 1982 12:17-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Re: CAR of NIL
To: BENSON at UTAH-20
cc: Common-Lisp at SU-AI

I'd like to point out that the justification you give for your vote is
purely in terms of estimated implementation difficulty.

∂07-Mar-82  1111	Eric Benson <BENSON at UTAH-20> 	Re: CAR of NIL   
Date:  7 Mar 1982 1209-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: CAR of NIL
To: dlw at MIT-AI
cc: Common-Lisp at SU-AI
In-Reply-To: Your message of 7-Mar-82 1017-MST

True enough.  Standard Lisp defines () as NIL, but its CAR and CDR are
illegal.  I don't see the conversion to () as a great effort, mainly just
a matter of finding cases of 'NIL.  Since I don't have an ideological axe to
grind, I see the issue as the cost of converting old code vs. the cost to
new implementations of overloading NIL.
-------

∂07-Mar-82  1609	FEINBERG at CMU-20C 	() vs NIL
Date: 7 March 1982  19:11-EST (Sunday)
From: FEINBERG at CMU-20C
To:   Common-Lisp at SU-AI
Subject: () vs NIL

Howdy!
	I am strongly in favor of proposal #2, () should be the
representation of the empty list and falsehood.  The symbol NIL would
be permanently bound to () for compatibility purposes.  Any reasonable
Maclisp code would still work fine wrt. this change.  Certainly people
converting Maclisp code have much more dramatic changes to deal with,
like forward slash turning into backward slash (/ => \).  Unless
someone can come up with some reasonable code which would break with
this change, I would claim that compatibility is not an issue here,
and so we should go with what seems to me as a better way to represent
the empty list and false.  Is there any reason why people are against
this, aside from inertia?

∂07-Mar-82  2121	Richard M. Stallman <RMS at MIT-AI>
Date: 8 March 1982 00:10-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

When Maclisp was changed to make the car and cdr of nil be nil,
it was partly to facilitate transporting Interlisp code,
but mostly because people thought it was an improvement.
I've found it saves me great numbers of explicit tests of nullness.
I don't think that any other improvements in data structure facilities
eliminate the usefulness of this.  I still appreciate it on the Lisp machine
despite the presence of defstruct and flavors.

∂08-Mar-82  0835	Jon L White <JONL at MIT-MC> 	Divergence
Date: 8 March 1982 11:29-EST
From: Jon L White <JONL at MIT-MC>
Subject: Divergence
To: Masinter at PARC-MAXC
cc: common-lisp at SU-AI


You raise an extremely important point;  the slow evolution of Lisp
which has taken place over the past 10 years has been mostly "conservative"
(i.e., upward-compatible including bugs and misfeatures).  The several
"radical" departures from basic Lisp failed to get wide acceptance for just
that reason -- e.g., XLISP at Rutgers and MDL here at MIT.

    Date:  5 MAR 1982 1129-PST
    From: MASINTER at PARC-MAXC
    Divergences in Common-Lisp from common practice in the major dialects
    of Lisp in use today should be made for good reason.
    The stronger the divergence, the better the reasons need to be.
    The strength of the divergence can be measured by the amount of
    impact a change can potentially have on an old program: 
     little or no impact (e.g., adding new functions)
     mechanically convertible (e.g., changing order of arguments)
     mechanically detectable (e.g., removing functions in favor of others)
     not mechanically detectable (e.g., changing the type of the empty list).
    . . . 

However, I'd like to remind the community that COMMON-LISP was never
intended to be merely the merger of all existing MacLISP-like dialects.
Our original goal was to define a stable subset which all these
implementations could support, and which would serve as a fairly complete
medium for writing transportable code.  Note the important items: 
	stability
	transportability  (both "stock" and special-purpose hardware)
	completeness      (for user, not necessarily for implementor)
	good new features
Each implementation has to "give a little" for this to be a cooperative 
venture; I certainly hope that no one group would be refractory to
another group's issues.

Previous notes from RMS and myself tried to make the case, as succinctly
as possible, for () vs NIL;  these arguments may be better appreciated 
by a relative newcomer to Lisp [and it is the future generations who will 
benefit from the "fixes" applied now].  I believe that many in the current
user/implementor community *** who have already adapted themselves to the 
various warts and wrinkles of Lisp *** have overestimated the cost of
the NIL/() change and underestimated the impact of the "warts" on future 
generations.  

My note titled "How useful will a liberated T and NIL be?"
attempts to show that only the worst malice-aforethought cases will
cause problems, despite the potential loophole for failure at
mechanical conversion.  As Benson put it, probably the only place
where the "compatibility" approach [i.e., setq'ing NIL to ()] may
fail is in instances of "'NIL", and similar constructs.
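
For concreteness, here is a tiny sketch of that failure mode (purely
illustrative, assuming the proposal where NIL is a symbol bound to ()
but no longer identical to it):

  ;; Under the "compatibility" approach the variable NIL still evaluates
  ;; to the empty list, so ordinary code keeps working:
  (null nil)        ;=> true, since NIL evaluates to ()
  ;; but a *quoted* NIL names the symbol, not the empty list:
  (null 'nil)       ;=> false under the split, true in the status quo
  (eq 'nil '())     ;=> likewise false under the split
  ;; So code that builds list structure out of 'NIL must be found and edited.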

∂08-Mar-82  1904	<Guy.Steele at CMU-10A>  	There's a market out there...
Date:  8 March 1982 2203-EST (Monday)
From: <Guy.Steele at CMU-10A> 
To: bug-macsyma at MIT-MC, common-lisp at SU-AI
Subject:  There's a market out there...

From today's Pittsburgh Press:

  Dear Consumer Reports:  After 35 years, I'm studying algebra again.
Can you recommend a calculator out of the many available that would be
reasonably suited to solving algebra problems?
  I already have several calculators that are suited to basic arithmetic
but not much more.

  Dear Reader:  Nearly all calculators do arithmetic: You plug in the numbers
and you get an answer.
  Most of them will not do algebra.  They will not factor; they will not
solve equations.  Only some special programmable calculators have those
algebraic capabilities.

--Guy

∂10-Mar-82  2021	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Vectors and Arrays   
Date: 10 Mar 1982 2318-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: common-lisp at SU-AI
Subject: Vectors and Arrays
Message-ID: <820209231809FAHLMAN@CMU-20C>


There is yet another rather fundamental issue that we need to discuss:
how to handle vectors and arrays.  It is not clear to me what, if
anything, was decided at the November meeting.  There is a line in the
"Decisions" document indicating that henceforth vector is to be a
subtype of array, but this could mean any number of things, some of them
reasonable and some of them not.  Let me try briefly to spell out the
issues as I see them and to propose a possible solution.

First, let me explain the rationale for making vectors and arrays
distinct types in the Swiss Cheese edition.

In non-microcoded implementations (which Common Lisp MUST accommodate if
it is to be at all common), it is important to have a very simple vector
data type for quick access.  Features that are only used occasionally
and that make access more expensive should not be included in the
simplest kind of vector.  That means leaving out fill pointers and the
ability to expand vectors after they are allocated, since the latter
requires an extra level of indirection or some sort of forwarding
pointer.  These simple vectors are referenced with VREF and VSET, which
tip off the compiler that the vector is going to be a simple one.
Bit-vectors and strings must also be of this simple form, for the same
reason: they need to be accessed very efficiently.

Given a vector data type, it is very straightforward to build arrays
into the system.  An array is simply a record structure (built from a
vector of type T) that contains slots for the data vector (or string),
the number of indices, the range of each index, a fill pointer, perhaps
some header slots, and so on.  The actual data is in a second vector.
Arrays are inherently more expensive to reference (using AREF and ASET),
and it seems to me that this is where you want to put the frills.  The
extra level of indirection makes it possible to expand the array by
allocating a new data vector; the expanded array (meaning the header
vector) is EQ to the original.  A fill pointer adds negligible expense
here, and makes sense since the array is able to grow.  (To provide fill
pointers without growability is pretty ugly.)
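
As a very rough sketch of that layout (the slot names are hypothetical,
and VREF here is the proposed simple-vector accessor, with SVREF as a
stand-in so the example is self-contained):

  ;; An array represented as a record built from a vector of type T.
  ;; The elements live in a separate data vector, so growing the array
  ;; just means installing a bigger data vector in the header.
  (defstruct array-header
    data            ; the data vector (or string, or bit-vector)
    rank            ; number of indices
    dims            ; the range of each index
    fill-pointer)   ; active length; sensible here because the array can grow

  ;; Stand-in for the proposed VREF:
  (defun vref (v i) (svref v i))

  (defun aref-1 (a i)
    ;; 1-D AREF through the header: one extra indirection relative to VREF.
    (vref (array-header-data a) i))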

So, the original proposal, as reflected in Swiss Cheese, was that
vectors and arrays be separate types, even if the array is 1-D.  The
difference is that arrays can be expanded and can have fill pointers and
headers, while vectors cannot.  Strings and bit-vectors would be
vectors; if you want the added hair, you create a 1-D array of bits or
of characters.  VREF only works on vectors and can therefore be
open-coded efficiently; there is no reason why AREF should not work on
both types, but the array operations that depend on the fancy features
will only work on arrays.
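
In use, the split would look roughly like this (MAKE-VECTOR is a
hypothetical constructor name, used only for illustration):

  (setq v (make-vector 10))     ; a simple vector: no fill pointer, no growth
  (vref v 3)                    ; open-codable; no run-time dispatch needed
  (setq a (make-array '(10)))   ; a 1-D array: header plus data vector
  (aref a 3)                    ; AREF works on arrays and on vectors alike
  (vref a 3)                    ; would be an error: A is not a simple vector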

The problem is that the Lisp Machine people have already done this the
opposite way: they implement arrays as the fundamental entity, complete
with headers, fill pointers, displacement, and growability.  There is no
simpler or cheaper form of vector-like object available on their system.
(I think that this is a questionable decision, even for a microcoded
system, but this is not the forum in which to debate that.  The fact
remains that their view of arrays is woven all through Zetalisp
and they evidently do not want to change it.)

Now, if the Lisp Machine people really wanted to, they could easily
implement the simpler kind of vector using their arrays.  There would
simply be a header bit in certain 1-D arrays that marks these as vectors;
arrays so marked would not be growable and could not be given headers or
fill pointers.  Those things that we would have called vectors in the
original scheme would have this bit set, and true arrays would not.

I was not at the November meeting, but I gather that the Lisp Machine
folks rejected this suggestion -- why do extra work just to break
certain features that you are already paying for and that, in the case
of strings, are already being used in some places?  The position stated
by Moon was that there would be no distinction between vectors and any
other 1-D arrays in Zetalisp.  However, if we simply merge these types
throughout Common Lisp, the non-microcoded implementations are screwed.

Could everyone live with the following proposal?

1. Vector is a subtype of Array.  String and Bit-Vector are subtypes of
Vector.

2. AREF and ASET work for all arrays (including the subtypes listed
above).  The generic sequence operators work for all of the above, but
only for 1-D arrays.  (I believe that the proposal to treat multi-D
arrays as sequences was voted down.)

3. VREF and VSET work only for vectors, including Strings and
Bit-Vectors.

4. We need a new predicate (call it "EXTENSIBLEP"?).  If an array is
extensible, then one can grow it, give it a fill pointer, displace it,
etc.

5. In the Common Lisp spec, we say that vectors and their subtypes
(including strings) are not, in general, extensible.  The arrays
created by MAKE-ARRAY are extensible, at least as a default.  Thus, in
vanilla Common Lisp, users could choose between fast, simple vectors and
strings and the slower, extensible 1-D arrays.

6. Implementations (including Zetalisp) will be free to make vectors
extensible.  In such implementations, all arrays would be extensible and
there would be no difference between vectors and 1-D arrays.
Implementations that take this step would be upward-compatible supersets
of Common Lisp.  Code from vanilla implementations can be ported to
Zetalisp without change, and nothing will break; the converse is not
true, of course.  This is just one of the ways in which Zetalisp is a
superset, so we haven't really given anything up by allowing this
flexibility.

7. It would be nice if the superset implementations provided a
"compatibility mode" switch which would signal a (correctable) runtime
error if a vector is used in an extensible way.  One could turn this on
in order to debug code that is meant to be portable to all Common Lisp
implementations.  This, of course, is optional.
   --------

∂10-Mar-82  2129	Griss at UTAH-20 (Martin.Griss) 	Re: Vectors and Arrays
Date: 10 Mar 1982 2225-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Re: Vectors and Arrays
To: FAHLMAN at CMU-20C
cc: Griss
In-Reply-To: Your message of 10-Mar-82 2118-MST
Remailed-date: 10 Mar 1982 2227-MST
Remailed-from: Griss at UTAH-20 (Martin.Griss)
Remailed-to: common-lisp at SU-AI

Seems a pity not to have VECTORS as the basic, "efficient" type and build arrays
on top as proposed; the other model of "compatibility", adopted just to avoid change to ZetaLISP,
seems the wrong approach; once again it adds and "institutionalizes" the
large variety of alternatives that tend to make the task of defining and
implementing a simple kernel more difficult.
-------

∂10-Mar-82  2350	MOON at SCRC-TENEX 	Vectors and Arrays--briefly   
Date: Thursday, 11 March 1982  02:38-EST
From: MOON at SCRC-TENEX
To:   Scott E. Fahlman <FAHLMAN at CMU-20C>
Cc:   common-lisp at SU-AI
Subject: Vectors and Arrays--briefly

In the Lisp machine (both of them), those arrays that have as few
features as vectors do are implemented as efficiently as vectors
could be.  Thus there would be no advantage to adding vectors, and
all the usual disadvantages of having more than one way of doing
something.  The important difference between Lisp computers and
Fortran computers is that on the former it costs nothing for ASET
to check at run time whether it is accessing a simple array or
a complex one, while on the latter this decision must be made at
compile time.  Hence vectors.  Since vectors add nothing to the
language on our machine, we would prefer to keep whatever is put in
for them as unobtrusive as possible in order to avoid confusing our
users with unnecessary multiple ways of doing the same thing.  Of
course, we are willing to put in functions to allow portability
to and from implementations that can't get along without vectors.

A second issue is that there are very few programs that use strings
for anything more than you can do with them in Pascal (i.e. print
them out) that would be portable to implementations that did not
permit strings with fill-pointers.  The important point here is that
it needs to be possible to create an object with a fill-pointer on
which the string-specific functions can operate.  This could be
done either by making those functions accept arrays or by making
vectors have fill-pointers.  This was discussed at the November
meeting; if my memory is operating correctly the people with
non-microcoded implementations (the only ones who care) opted for
making vectors have fill-pointers on the theory that it would be
more efficient than the alternative.  I believe it is the case that
it is really the string-specific functions we are talking about here,
not just the generic sequence functions.

To address the proposal.  1 and 2 are okay.  It is inconvenient to
enforce 3 in compiled code on the Lisp machine, since we would have
to add new instructions solely for this purpose.  It's unclear what
4 means (but presumably if it was clarified there would be no problem
in implementing it, other than the possibility that vectors might
become less efficient than arrays on the Lisp machine because of the
need to find a place in the array representation to remember that
they are vectors).  5 is okay except that the portable subset really
needs strings with fill-pointers (extensibility is also desirable,
but very much less important).  6 and 7 are okay (but see 3).

To me the important issue is that to the user of a Lisp machine,
vectors and VREF are not something he has to worry about except
under the heading of writing portable code.

∂11-Mar-82  1829	Richard M. Stallman <RMS at MIT-AI>
Date: 11 March 1982 20:13-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

The distinction between vectors and arrays is only a compromise
for the sake of old-fashioned architectures.  It is much less
clean than having only one kind of object.  It is ok for the
Lisp machine to accommodate to this compromise by defining
a few extra function names, which will be synonyms of existing
functions on the Lisp machine, but would be different from those
existing functions in other implementations.  But it would be
bad to implement any actual distinction between "vectors"
and "arrays".

∂12-Mar-82  0825	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Re: Vectors and Arrays    
Date: 12 Mar 1982 1117-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: DLW at MIT-AI
cc: common-lisp at SU-AI
Subject: Re: Vectors and Arrays
Message-ID: <820211111735FAHLMAN@CMU-20C>
Regarding: Message from Daniel L. Weinreb <DLW at MIT-AI>
              of 11-Mar-82 1216-EST

This is basically a re-send of a message I sent yesterday, attempting to
clarify the Vector/Array proposal.  Either I or the mailer seems to
have messed up, so I'm trying again.  If you did get the earlier
message, I apologize for the redundancy.

To clarify the proposal: The Common Lisp spec would require that VREF
and VSET work for all vectors.  If they also work for other kinds of
arrays in Zetalisp (i.e. they just translate into AREF and ASET), that
would be OK -- another way in which Zetalisp is a superset.  As with the
business about extensibility, it would be nice to have a compatibility
mode in which VREF would complain about non-vector args, but this is not
essential.  Note also that Zetalisp users could continue to write all
their code using AREF/ASET instead of VREF/VSET; if they port this code
to a "Fortran machine" it would still work, but would not be optimally
fast.

The whole aim of the proposal is to allow Zetalisp to continue to build
arrays their way, while not imposing inefficiency on non-microcoded
implementations.  So we would definitely provide accessing and modifying
primitives for getting at fill-pointers and the like.  Legal Common Lisp
code would not get at such things by looking in slot 27 of the array
header vector, or whatever.

I would not be violently opposed to requiring all vectors (including
strings) to have a fill pointer.  This would cost one extra word per
vector, but the total overhead would be small.  It would not really cost
extra time per access, since we would just bounds-check against the
fill-pointer instead of the allocated length.  If a compiler wants to
provide (as an option, not the default) a maximum-speed vector access
without bounds checking, it could still do so, and would run roughly the
same set of risks.  (Probably this is unwise in any event.)  So the cost
of fill pointers is really not so bad.  The reason we left them out was
because it seemed that providing a fill-pointer in a non-growable vector
was not a useful or clean thing to do.  And allowing vectors to grow
really is a significant added expense without forwarding pointers in the
hardware.

Do the Zetalisp folks really want fill pointers in non-growable strings,
or would it be better to go with mostly simple strings, with character
arrays around for when you want an elastic editor buffer or something?

-- Scott
   --------

∂12-Mar-82  1035	MOON at SCRC-TENEX 	Re: Vectors and Arrays   
Date: Friday, 12 March 1982  13:11-EST
From: MOON at SCRC-TENEX
To:   Scott E. Fahlman <FAHLMAN at CMU-20C>
Cc:   common-lisp at SU-AI
Subject: Re: Vectors and Arrays

Yes, we want fill-pointers in non-growable strings.  I think I said this
in my message anyway.  Actually it only takes about 15 seconds to figure
out how to have two kinds of vectors, one with fill pointers and one
without, while still being able to open-code VREF, VSET, and
VECTOR-ACTIVE-LENGTH in one instruction (VECTOR-LENGTH, on the other hand,
would have to check which kind of vector it was given).  So the extra
storage is not an issue in any case.

∂14-Mar-82  1152	Symbolics Technical Staff 	The T and NIL issues   
Date: Sunday, 14 March 1982  14:40-EST
From: Symbolics Technical Staff
Reply-to: Moon@SCRC-TENEX@MIT-MC
To:   Common-Lisp at SU-AI
Subject: The T and NIL issues

I'm sorry this message has been so long delayed; my time has been
completely occupied with other projects recently.

We have had some internal discussions about the T and NIL issues.  If we
were designing a completely new language, we would certainly rethink these,
as well as the many other warts (or beauty marks) in Lisp.  (We might not
necessarily change them, but we would certainly rethink them.)  However,
the advantages to be gained by changing T and NIL now are quite small
compared to the costs of conversion.  The only resolution to these issues
that Symbolics can accept is to retain the status quo.

To summarize the status quo:  NIL is a symbol, the empty list, and the
distinguished "false" value.  SYMBOLP, ATOM, and LISTP are true of it;
CONSP is not.  CAR, CDR, and EVAL of NIL are NIL.  NIL may not be used
as a function nor as a variable.  NIL has a property list.  T is a symbol
and the default "true" value used by predicates that are not semi-predicates
(i.e. that don't return "meaningful" values when they are true.)  EVAL of
T is T.  T may not be used as a variable.  T is a keyword recognized by
certain functions, such as FORMAT.

The behavior of LISTP is a change to the status quo which we agreed to long
ago, and would have implemented long ago if we weren't waiting for Common
Lisp before making any incompatible changes.  The status quo is that NIL
has a property list; however, this point is probably open to negotiation if
anyone feels strongly that the property-list functions should error when
given NIL.

The use of T as a syntactic keyword in CASEQ and SELECTQ should not be
carried over into their Common Lisp replacement, CASE.  It is based on a
misunderstanding of the convention about T in COND and certainly adds
nothing to the understandability of the language.

T and NIL are just like the hundreds of other reserved words in Lisp,
dozens of which are reserved as variables, most of the rest as functions.
Any particular program that wants to use these names for ordinary symbols
rather than the special reserved ones can easily do so through the use of
packages.  There should be a package option in the portable package system
by which the reserved NIL can be made to print as "()" rather than
"GLOBAL:NIL" when desired.

∂14-Mar-82  1334	Earl A. Killian <EAK at MIT-MC> 	The T and NIL issues  
Date: 14 March 1982 16:34-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  The T and NIL issues
To: Moon at SCRC-TENEX
cc: Common-Lisp at SU-AI

There is certainly one advantage to having () not be a symbol for
Common Lisp (though not for the Lisp Machine), and that's
implementation and efficiency.  The last time this came up, DLW
pointed out that having the CAR and CDR of a symbol be () was
only an implementation detail, as if that made it unimportant.
Now I understand that many Common Lisp decisions have given
implementation a back seat to aesthetics, but here's a case where
most people (except HIC) think the aesthetics call for the change
(the usual argument against the change is compatibility, not
aesthetics -- you even said that in a completely new language you
would rethink them).

You said "The only resolution to these issues that Symbolics can
accept is to retain the status quo", but you didn't say why.
Why?  If compatibility is the only reason, then why isn't the
reader hack of NIL => () acceptable?  I just don't believe many
programs depend on (SYMBOLP NIL).

What if others don't want to kludge up their implementation, and
so the only thing they can accept is a change in the status quo?

∂14-Mar-82  1816	Daniel L. Weinreb <dlw at MIT-AI> 	Re: Vectors and Arrays   
Date: Sunday, 14 March 1982, 18:27-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Re: Vectors and Arrays
To: FAHLMAN at CMU-20C
Cc: common-lisp at SU-AI

What I especially wanted to see clarified was this: you said that arrays
could be thought of as being implemented as a vector, one of whose
elements is another, internal vector, that holds the real values of the
elements.  Are you proposing that there be a primitive to access this
internal vector?  Such a primitive might be hard to implement if arrays
are not really implemented the way you said.  (I'm not saying we can't
do it; I don't know for sure whether it's very hard or not.  I just
wanted to know what you were proposing.)

∂14-Mar-82  1831	Jon L White <JONL at MIT-MC> 	The T and NIL issues (and etc.)    
Date: 14 March 1982 21:31-EST
From: Jon L White <JONL at MIT-MC>
Subject:  The T and NIL issues (and etc.)
To: moon at SCRC-TENEX
cc: common-lisp at SU-AI


The msg of the following dateline certainly describes well the status
quo in MacLISP (both PDP10 and LISPM), as well as pointing out that
T is special-cased in CASE clauses.
    Date: Sunday, 14 March 1982  14:40-EST
    From: Symbolics Technical Staff
    Reply-to: Moon@SCRC-TENEX@MIT-MC
    To:   Common-Lisp at SU-AI
    Subject: The T and NIL issues
    . . . 
But as EAK says, there  is no reasoning given, beyond the authors' 
personal preference, for retaining the "wart" of NIL = ().

One comment from that msg deserves special attention:
    T and NIL are just like the hundreds of other reserved words in Lisp,
    dozens of which are reserved as variables, most of the rest as functions.
    . . . 
Why should even dozens of user-visible variables be reserved?  This is one 
of the strongest complaints against LISP heard around some MIT quarters --
that it has become too hairy, and the presence of the LISP Manual doesn't
help any.  And again, even if there be many "reserved" names for functions,
the separability of function-cell/value-cell makes this irrelevant to the
T/NIL issue.  

Perhaps the package system could "hide" more of the systemic 
function/variables, but why should it come up now?  The notion of 
lexically-scoped variables, as mentioned in my note
    Date: 5 March 1982 12:09-EST
    From: Jon L White <JONL at MIT-MC>
    Subject: How useful will a liberated T and NIL be?
    To: Hedrick at RUTGERS
indicates that the variable T (and indeed NIL too) can be fully useful, 
even if its global value serves in its present "status quo" capacity.  
E.g., in
  (DEFUN FOO (PRED F T)
    (DECLARE (LOCAL F T))
    (COND (F (NULL PRED))
	  (T PRED)
	  (#T () )))
the local declaration will totally isolate "T" from the effects of any
global binding.

∂14-Mar-82  1947	George J. Carrette <GJC at MIT-MC> 	T and NIL
Date: 14 March 1982 22:48-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: T and NIL
To: EAK at MIT-MC
cc: common-lisp at SU-AI

Efficiency really isn't an issue here because it is very easy to get
CAR and CDR of a symbol NIL to be NIL. Take VAX-NIL for instance,
symbols have two value-cells, so its easy to make CAR access one of
the cells, and CDR the other. One could even arrange to have the symbol
structure reside across a page boundary, so the CAR/CDR cells would be
on a read-only-page, and the function cells, PLIST, and PNAME would be
on a read-write-page. There would be an average of one instruction more
executed in the error-checking version of CAR and CDR. For the benefit
of other lisps I would recommend that the function cell be pure too, though.

However, it is interesting that the overloading *was* relatively costly
in terms of codesize for various open-coded primitives in Maclisp.
Doubling the number of instructions for TYPEP, triple for PLIST,
50% more for SYMBOLP. Of course there was a time not very long ago,
(see the "Interim Lisp Manual" AI MEMO by JONL) when the 18 bit address
space of the pdp-10 was said to be more than anyone could want.


∂14-Mar-82  2046	Jon L White <JONL at MIT-MC> 	Why Vectors? and taking a cue from SYSLISP   
Date: 14 March 1982 23:08-EST
From: Jon L White <JONL at MIT-MC>
Subject:  Why Vectors? and taking a cue from SYSLISP
To: fahlman at CMU-10A
cc: COMMON-LISP at SU-AI


This note is probably the only comment I'll have during this round
(via mail) on the "Vectors and Arrays" issue.  There is so much in 
the mails on it now that I think we should have had more face-to-face 
discussions, preferably by a small representative group which could
present its recommendations.

Originally, the NIL proposal wanted a distinction between
ARRAYs, with potentially hairy access methods, and simple
linear index-accessed data, which we broke down into three 
prominent cases of VECTORs of "Q"s, STRINGs of characters, 
and BITStrings of random data.  The function names VREF,
CHAR, and BIT/NIBBLE are merely access methods, for there is 
a certain amount of "mediation" that has to be done upon 
access of a sequence with packed elements.  Admittedly, this 
distinction is non-productive when micro-code can select the 
right access method at runtime (based on some internal structure 
of the datum), but it is critical for efficient open-compilation
on stock hardware.  So much for history and rationale.

Some of the discussion on the point now seems to be centered
around just exactly how these data structures will be implemented,
and what consequences that might have for the micro-coded case.
E.g., do we need two kinds of VECTORs?  I don't think so, but in
order to implement vectors with the "growability" property it may
be better to drop the data format of the existing NIL implementations
(where the length count is stored in the word preceding the data).
For instance, if vectors (all kinds: Q, character, and bit) are
implemented as a fixed-word header with a count/active component and
an address component (see the sketch after the list below), then the
following advantages(+)/disadvantages(-) can be seen:
  1+) Normal user code has type-safe operations on structured data
      (at least in interpreter and close-compiled code)
  2+) "system" type code can just extract the address part, and
      deal with the data words almost as if the code were written 
      in a machine-independent systems language (like "C"?)  I think
      the SYSLISP approach of the UTAH people may be somewhat like this.
  3-) Access to an individual element, by the normal user-level functions,
      is slower by one memory reference;  but this may be of lesser
      importance if most "heavy" usage is through system functions like
      STRING-REVERSE-SEARCH.  There is also room for optimization
      by clever compilers to bypass most of the "extra" time.
  4-) use of "addresses", rather than typed data is a loophole in
      the memory integrity of the system;  but who needs to protect
      the system programmer from himself anyway.
  5+) hardware forwarding pointers wouldn't be necessary to make
      growability and sharability work -- they work by updating the
      address and length components of the vector header;  true, there 
      would not be full compatibility with forwarding-pointer 
      implementations (installing a new "address" part loses some
      updates that wouldn't be lost under forwarding pointers), but
      at least NSUBSTRING and many others could work properly.
  6-) without micro-code, it would probably be a loss to permit random
      addresses (read, locatives) into the middle of vectors; thus
      sharability would probably require a little extra work somewhere so 
      that the GC wouldn't get fouled up.  Shared data may need to be
      identified.  This can be worked out.
  7+) even "bibop" implementations with generally-non-relocating GC can 
      implement these kinds of vectors (that is, with "headers") without 
      much trouble.
  8+) it will be easier to deal with chunks of memory allocated by a
      host (non-Lisp) operating system this way;  e.g. a page, whereby
      any "header" for the page need not appear at any fixed offset
      from the beginning of the  page.
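
A minimal sketch of that header layout, written as a Lisp record rather
than actual machine words (all names here are illustrative, not part of
any existing NIL format):

  (defstruct vector-header
    length          ; words/characters/bits allocated
    active          ; count of elements currently in use
    address)        ; "address part": the block holding the actual elements

  ;; Growing works by installing a larger block and updating the header
  ;; in place, so no hardware forwarding pointer is needed:
  (defun grow-vector (v new-block new-length)
    (setf (vector-header-address v) new-block
          (vector-header-length v) new-length)
    v)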

As far as I can see, retaining the NIL idea of having header information 
stored at fixed offset from the first address primarily alleviates point 3 
above.  It also permits two kinds of vectors (one with and one without 
header information) to be implemented so that the same open-coded accessing 
sequence will work for both.  I think we may be going down a wrong track by 
adhering to this design, which is leading us to two kinds of vectors.   
The SYSLISP approach, with possibly additional "system" function names
for the various access methods, should be an attractive alternative.
[DLW's note of
    Date: Sunday, 14 March 1982, 18:27-EST
    From: Daniel L. Weinreb <dlw at MIT-AI>
    Subject: Re: Vectors and Arrays
    To: FAHLMAN at CMU-20C
seems to indicate his confusion about the state of this question -- it
does need to be cleared up.]

∂14-Mar-82  2141	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Re: Vectors and Arrays    
Date: 15 Mar 1982 0037-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: dlw at MIT-AI
cc: common-lisp at SU-AI
Subject: Re: Vectors and Arrays
Message-ID: <820214003722FAHLMAN@CMU-20C>
Regarding: Message from Daniel L. Weinreb <dlw at MIT-AI>
              of 14-Mar-82 1842-EST

No, I am not proposing that there be primitives to access the data vector
of an array in user-level Common Lisp.  An implementation might, of course,
provide this as a non-portable hack for use in writing system-level stuff.
At the user level, the only way to get at the data in an array is through
AREF.

-- Scott
   --------

∂17-Mar-82  1846	Kim.fateman at Berkeley 	arithmetic
Date: 17 Mar 1982 17:00:32-PST
From: Kim.fateman at Berkeley
To: steele@cmu-10
Subject: arithmetic
Cc: common-lisp@su-ai

Major argument against providing log(-1) = #c(0 3.14...):
(etc)

It provides a violation of log(a*b) = log(a)+log(b), which most
people expect to hold on the real numbers (e.g., log((-1)*(-1)) =
log(1) = 0, but log(-1)+log(-1) would be 2*pi*i).  You may argue that
by asking for log of a negative number the user was asking for it,
yet it is more likely than not that this came up by a programming
error, or perhaps roundoff error.  The option of computing
log(-1+0*i) (or perhaps clog(-1)) is naturally open.

I strongly suggest that rational arithmetic
be canonical (2/4 converted to 1/2) and that 1/0, -1/0, and 0/0 be REQUIRED.
Given that gcd(x,0) is x, there is
almost no checking needed for these peculiar numbers, which represent
+inf, -inf, and undefined.  Rules like 1/inf -> 0 fall through for free.

The only "extra" check is that if the denominator of a sum turns
out to be 0, one has to figure out if the answer is 1/0, -1/0, or 0/0.
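
A minimal sketch of the canonicalization step, assuming a simple
numerator/denominator pair representation (MAKE-RAT and the pair layout
are hypothetical, just to show how 1/0, -1/0, and 0/0 fall out of the
gcd rule):

  (defun make-rat (num den)
    ;; Keep the sign in the numerator, then divide out the gcd.  Since
    ;; (gcd x 0) = |x|, the forms 1/0, -1/0, and 0/0 survive untouched and
    ;; serve as +inf, -inf, and undefined; 0/0 needs its own check only
    ;; because dividing by (gcd 0 0) = 0 is impossible.
    (when (minusp den)
      (setq num (- num) den (- den)))
    (let ((g (gcd num den)))
      (if (zerop g)
          (cons 0 0)                      ; 0/0: undefined
          (cons (/ num g) (/ den g)))))

  ;; (make-rat 2 4)  => (1 . 2)      (make-rat 3 0) => (1 . 0)   [+inf]
  ;; (make-rat -3 0) => (-1 . 0)     (make-rat 0 0) => (0 . 0)   [und]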

Similar ideas for +-inf, und, hold for IEEE-format numbers.

I have a set of programs which implement (in Franz) a common-lisp-like
i/o scheme, rational numbers, DEC-D flonums, integers, arbitrary-precision
floating point (macsyma "bigfloat"),
and  complex numbers (of any mixture of these, eg.  #c(3.0 1/2)). 
In the works is an interval arithmetic package, and a trap-handler.
There is also a compiler package in the works so that (+ ....) is
compiled with appropriate efficiency in the context of
appropriate declarations. 

I would be glad to share these programs with anyone who cares to
look at the stuff.

The important transcendental functions are implemented for real
arguments of flonum and bigfloat. 

Q: What did you have in mind for, for example, sqrt(rational)?
(what is the "required coercion"?)

∂18-Mar-82  0936	Don Morrison <Morrison at UTAH-20> 	Re: arithmetic
Date: 18 Mar 1982 1035-MST
From: Don Morrison <Morrison at UTAH-20>
Subject: Re: arithmetic
To: Kim.fateman at UCB-C70
cc: common-lisp at SU-AI
In-Reply-To: Your message of 17-Mar-82 1800-MST

Would it not make more sense to have 1/0, -1/0, and 0/0 print as something
which says infinity, -infinity, and undefined (e.g. #INF, #-INF, #UNDEF; I
know these aren't good choices, but you get the idea)?  There is still
nothing to prevent the implementer from representing them internally as
1/0, -1/0, and 0/0 and having everything fall through nicely; readers and
printers just have to be a little cleverer.
-------

∂18-Mar-82  1137	MOON at SCRC-TENEX 	complex log    
Date: Thursday, 18 March 1982  14:23-EST
From: MOON at SCRC-TENEX
to:   common-lisp at su-ai
Subject: complex log

On issue 81 the November meeting voted for D.  I think the people at the
meeting didn't really understand the issues, and Fateman's message of
yesterday reinforces my belief that C is the only satisfactory choice.
This implies that complex numbers with 0 imaginary part don't normalize
to real numbers.  This is probably a good idea anyway: since complex
numbers are (usually) flonums, zero isn't well-defined.  (We don't
normalize flonums with 0 fraction part to integers.)

∂18-Mar-82  1432	CSVAX.fateman at Berkeley 	INF vs 1/0   
Date: 18 Mar 1982 14:05:30-PST
From: CSVAX.fateman at Berkeley
To: Morrison@UTAH-20
Subject: INF vs 1/0
Cc: common-lisp@su-ai



Basically, reading and writing these guys either way is no big deal.
There are representations of infinity in several floating point formats
(IEEE single, double, extended), which are printed as #[s INF] etc.
in the simple read/print package I have.  #[r INF] would be consistent,
though eliminating some of the syntax  (the CL manual does not have the
[] stuff) may make numeric type info hard to determine.  I do not like
to use unbounded lookahead scanners.  (Think about reading an atom which
looks like a 2000-digit bignum, but then turns out to be something else
at the 2001st character.)


Undefined numeric objects ("Not A Number") in the IEEE stuff are much
stickier.  Presumably there is some information encoded in the number
that should be presented (e.g. how the object was produced).

∂24-Mar-82  2102	Guy.Steele at CMU-10A 	T and NIL   
Date: 24 March 1982 2357-EST (Wednesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  T and NIL


As nearly as I can tell, the arguments about changing NIL to () may be
divided into these categories (I realize that I may have omitted some
arguments--please don't deluge me with repetitions of things I have left
out here):

Aesthetics.
   Pro: NIL is ugly, especially as the empty list.
   Con: () is ugly, especially as logical falseness.

Convenience.
   Pro: Predicates such as SYMBOLP can usefully return the argument.
   Con:	If you change it then the empty list and false don't have property
	lists.

Compatibility.
   Con: Old code may be incompatible and may not be mechanically convertible.
	There is a large investment in old code.
		[I cannot resist noting here that the usual cycle of life
		continues: the radicals of 1975 are today's conservatives.]
   Pro: A small amount of anecdotal evidence indicates that old code that
	actually does depend on the empty list or falseness being a symbol
	has a bug lurking in it.

Inertia.
   Con: LISP has always used NIL, and people are used to it.
   Pro: It isn't difficult to get used to ().  Not only has NIL tried it;
	the Spice LISP project has used () for over a year and has found
	it quite comfortable.
   Con: Nevertheless, many people remain unconvinced of this, and this
	may serve as a significant barrier to getting people to try
	Common Lisp.

Implementation.
   Pro: In non-microcoded implementations, it is difficult to make
	CAR and CDR, SYMBOLP, and symbol-manipulating functions all
	be as efficient in compiled code as they might be if NIL and ()
	were distinct objects.

Ad hominem.
   [I will not dignify these arguments by repeating them here.]

Different people weigh these categories differently in importance.
I happen to lay great weight on aesthetics (the Pro side), convenience,
and implementation, and much less on compatibility and inertia.

Someone has also pointed out that the argument from implementation would
disappear if CAR and CDR of NIL were no longer permitted.  This strikes
me as quite perceptive and reasonable.  However, I am quite certain that
hundreds of *correct* programs now depend on this, as opposed to the
programs (whose very existence is still doubtful to me) that, correctly
or otherwise, depend on () being the symbol NIL.

Therefore I remain convinced that making the empty list *not* be a symbol
is technically and aesthetically the better choice.


HOWEVER, the primary purpose of Common LISP is not to be maximally
elegant, nor to be technically perfect, nor yet to be implementable with
maximal ease, although these are laudable aims and are important
secondary goals of Common LISP.

	The primary goal of Common LISP is to be Common.

If so trivial and stupid an issue as () versus NIL will defeat efforts to
achieve this primary goal; and, which is more important, if inertia and
unfamiliarity might prevent new implementors from adopting Common LISP;
then I must yield.  I speak for myself, the Spice LISP project, and the
new DEC-sponsored VAX Common LISP project: we will all yield on this issue
and endorse the traditional role of NIL as symbol, falseness, and empty
lists, for the sake of preserving the commonality of Common LISP.

Similar remarks apply to T and #T; for the sake of commonality, #T ought
not be a part of Common LISP (but neither should Common LISP usurp it).

This issue must be settled soon; many outside people think that because
we haven't settled this apparently fundamental matter therefore Common
LISP is nowhere close to convergence.  Moreover, *any* decision is better
than trying to straddle the fence.

In any event, something has to go into the next draft of the manual,
pending what I hope will be a final resolution of this issue at the next
face-to-face meeting.  Since every major project (with the possible
exception of Vax NIL?) is now willing to go along with the use of the
symbol NIL as the false value and empty-list and with the use of the
symbol T as the standard truth value, this seems to be the only
reasonable choice.

--Guy

∂29-Mar-82  1037	Guy.Steele at CMU-10A 	NIL and ()  
Date: 29 March 1982 1307-EST (Monday)
From: Guy.Steele at CMU-10A
To: McDermott at Yale
Subject:  NIL and ()
CC: COMMON-LISP at SU-AI

    Date:    29-Mar-82 0923-EST
    From:    Drew McDermott <Mcdermott at YALE>
    I agree with everything you said in your message to Rees (forwarded
    to me), especially the judgement that CARing and CDRing the empty
    list is more important than whether NIL is identical to it.  What
    I am wondering is how the voting has gone?  How heavy is the majority
    in favor of the old way of doing things?  Who are they?  It seems 
    a shame for them to be able to exploit the willingness to yield of
    those on the correct side of this issue.
    -------

Drew,
   First it must of course be admitted that "correctness" is here at least
partly a matter of judgement.  Given that, I can report on what I believe
to be the latest sentiments of various groups.  In favor of the
traditional ()=NIL are the LISP Machine community (except for RMS),
the Standard LISP folks at Utah, and the Rutgers crowd.  In favor of ()
and NIL being separate (but willing to yield) are Spice LISP at CMU, S-1 NIL,
and DEC's Common LISP project.  The VAX NIL project is in favor of
separating () and NIL, but I don't know whether they are willing to compromise,
as I have not yet heard from them.
--Guy

∂30-Mar-82  0109	George J. Carrette <GJC at MIT-MC> 	NIL and () in VAX NIL.  
Date: 30 March 1982 03:55-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: NIL and () in VAX NIL.
To: Guy.Steele at CMU-10A
cc: McDermott at YALE, common-lisp at SU-AI

I would quote John Caldwell Calhoun (by the way, Yale class of 1804)
here, except that it could lead to unwanted associations with other
losing causes, so instead I'll labour the obvious. If the COMMON-LISP
manual is a winning and presentable document then the NIL and () issue
couldn't possibly cause VAX NIL to secede.


∂06-Apr-82  1337	The Technical Staff of Lawrence Livermore National Laboratory <CL at S1-A> 	T, NIL, ()    
Date: 06 Apr 1982 1021-PST
From: The Technical Staff of Lawrence Livermore National Laboratory <CL at S1-A>
Subject: T, NIL, ()
To:   common-lisp at SU-AI
Reply-To: rpg  

This is to confirm that S-1 Lisp is in agreement with the statements
of Guy Steele on the subject of T, NIL, and (), and though it would be
nice to improve the clarity and elegance of Common Lisp, we will forgo
such to remain common.  It is unfortunate that Symbolics finds it impossible
to compromise; however, we find no problem with their technical position.

What is next on the agenda? Another meeting? More manual writing? Perhaps
Steele would like to farm out some writing to `volunteers'?

∂20-Apr-82  1457	RPG   via S1-A 	Test
To:   common-lisp at SU-AI  
This is a test of the Common Lisp mailing list.
			-rpg-

∂20-May-82  1316	FEINBERG at CMU-20C 	DOSTRING 
Date: 20 May 1982  16:12-EDT (Thursday)
From: FEINBERG at CMU-20C
To:   Common-Lisp at SU-AI
Subject: DOSTRING

Howdy!
	Dostring was a very useful iteration construct, and I request
that it be put back into the manual.  I know that there is dotimes,
but I am much more interested in the characters of the string, not the
index into it.  It is very inefficient to keep on accessing the nth
character of a string, and a hassle to lambda-bind it, when there was
such a perfect construct for dealing with all this before.  I realize
we can't keep all the type specific functions, but this one seems
especially useful.
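
For reference, a minimal sketch of the kind of construct being asked
for (the exact argument conventions of the old DOSTRING are not spelled
out here, so this is modeled on DOTIMES/DOLIST and is only illustrative):

  (defmacro dostring ((var string &optional result) &rest body)
    ;; Bind VAR to each character of STRING in turn, so the body never
    ;; has to index into the string itself.
    (let ((s (gensym)) (i (gensym)))
      `(let ((,s ,string))
         (do ((,i 0 (+ ,i 1)))
             ((>= ,i (length ,s)) ,result)
           (let ((,var (char ,s ,i)))
             ,@body)))))

  ;; (dostring (c "foo") (princ c))  would print the characters f, o, o.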

∂02-Jun-82  1338	Guy.Steele at CMU-10A 	Keyword-style sequence functions
Date:  2 June 1982 1625-EDT (Wednesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Keyword-style sequence functions

Folks,
  At the November meeting there was a commission to produce
three parallel chapters on the sequence functions.  I'm going nuts
trying to get them properly coordinated in the manual; it would
be a lot easier if I could just know which one is right and do it
that way.
  As I recall, there was a fair amount of sentiment in favor of
Fahlman's version of the keyword-oriented proposal, and no serious
objections.  As a quick summary, here is a three-way comparison
of the schemes as proposed:
	;; Cross product
(remove 4 '(1 2 4 1 3 4 5))			=> (1 2 1 3 5)
(remove 4 '(1 2 4 1 3 4 5) 1)			=> (1 2 1 3 4 5)
(remove-from-end 4 '(1 2 4 1 3 4 5) 1)		=> (1 2 4 1 3 5)
(rem #'> 3 '(1 2 4 1 3 4 5))			=> (4 3 4 5)
(rem-if #'oddp '(1 2 4 1 3 4 5))		=> (2 4 4)
(rem-from-end-if #'evenp '(1 2 4 1 3 4 5) 1)	=> (1 2 4 1 3 5)
	;; Functional
(remove 4 '(1 2 4 1 3 4 5))			=> (1 2 1 3 5)
(remove 4 '(1 2 4 1 3 4 5) 1)			=> (1 2 1 3 4 5)
(remove-from-end 4 '(1 2 4 1 3 4 5) 1)		=> (1 2 4 1 3 5)
((fremove #'< 3) '(1 2 4 1 3 4 5))		=> (4 3 4 5)
((fremove #'oddp) '(1 2 4 1 3 4 5))		=> (2 4 4)
((fremove-from-end #'evenp) '(1 2 4 1 3 4 5) 1)	=> (1 2 4 1 3 5)
	;; Keyword
(remove 4 '(1 2 4 1 3 4 5))			=> (1 2 1 3 5)
(remove 4 '(1 2 4 1 3 4 5) :count 1)		=> (1 2 1 3 4 5)
(remove 4 '(1 2 4 1 3 4 5) :count 1 :from-end t)=> (1 2 4 1 3 5)
(remove 3 '(1 2 4 1 3 4 5) :test #'>)		=> (4 3 4 5)
(remove-if #'oddp '(1 2 4 1 3 4 5))		=> (2 4 4)
(remove-if '(1 2 4 1 3 4 5) :count 1 :from-end t :test #'evenp)	=> (1 2 4 1 3 5)

Remember that, as a rule, for each basic operation the cross-product
version has ten variant functions ({equal,eql,eq,if,if-not}x{-,from-end}),
the functional version has four variants ({-,f}x{-,from-end}),
and the keyword version has three variants ({-,if,if-not}).

What I want to know is, is everyone willing to tentatively agree on
the keyword-style sequence functions?  If so, I can get the next version
out faster, with less work.

If anyone seriously and strongly objects, please let me know as soon
as possible.
--Guy

∂04-Jun-82  0022	MOON at SCRC-TENEX 	Keyword-style sequence functions   
Date: Friday, 4 June 1982  03:06-EDT
From: MOON at SCRC-TENEX
To:   Guy.Steele at CMU-10A
Cc:   common-lisp at SU-AI
Subject: Keyword-style sequence functions

I'll take the keyword-style ones, as long as this line of your message
    (remove-if '(1 2 4 1 3 4 5) :count 1 :from-end t :test #'evenp)	=> (1 2 4 1 3 5)
is really a typo for
    (remove-if #'evenp '(1 2 4 1 3 4 5) ':count 1 ':from-end t)	=> (1 2 4 1 3 5)

∂04-Jun-82  0942	Guy.Steele at CMU-10A 	Bug in message about sequence fns    
Date:  4 June 1982 1214-EDT (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Bug in message about sequence fns
In-Reply-To:  Richard M. Stallman@MIT-AI's message of 3 Jun 82 23:45-EST

Thanks go to RMS for noticing a bug in my last message.  The last
example for the keyword-style functions should not be
(remove-if '(1 2 4 1 3 4 5) :count 1 :from-end t :test #'evenp)
but should instead be
(remove-if #'evenp '(1 2 4 1 3 4 5) :count 1 :from-end t)

I wasn't paying attention when I fixed another bug, resulting
in this bug.
--Guy

∂11-Jun-82  1933	Quux 	Proposed new FORMAT operator: ~U("units")   
Date: 11 June 1982 2233-EDT (Friday)
From: Quux
To: bug-lisp at MIT-AI, bug-lispm at MIT-AI, common-lisp at SU-AI
Subject:  Proposed new FORMAT operator: ~U("units")
Sender: Guy.Steele at CMU-10A
Reply-To: Guy.Steele at CMU-10A

Here's a krevitch that will really snork your flads.  ~U swallows
an argument, which should be a floating-point number (an integer or
ratio may be floated first).  The argument is then scaled by 10↑(3*K)
for some integer K, so that it lies in [1.0,1000.0).  If this
K is suitably small, then the scaled number is printed, then a space,
then a metric-system prefix.  If not, then the number is printed
in exponential notation, then a space.  With a :, prints the short prefix.
Examples:
 (FORMAT () "~Umeters, ~Uliters, ~:Um, ~:UHz" 50300.0 6.0 .013 1.0e7)
  =>  "50.5 kilometers, 6.0 liters, 13.0 mm, 10.0 MHz"

And you thought ~R was bad!

∂12-Jun-82  0819	Quux 	More on ~U (short) 
Date: 12 June 1982 1119-EDT (Saturday)
From: Quux
To: bug-lisp at MIT-AI, bug-lispm at MIT-AI, common-lisp at SU-AI
Subject:  More on ~U (short)
Sender: Guy.Steele at CMU-10A
Reply-To: Guy.Steele at CMU-10A

I forgot to mention that the @ flag should cause scaling by powers of 2↑10
instead of 10↑3:  (format () "~Ubits, ~:Ub, ~@Ubits, ~:@Ub" 65536 65536 65536 65536)
   =>  "65.536 kilobits, 65.536 Kb, 64.0 kilobits, 64.0 Kb"
--Q

∂18-Jun-82  1924	Guy.Steele at CMU-10A 	Suggested feature from EAK 
Date: 17 June 1982 1421-EDT (Thursday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Suggested feature from EAK


- - - - Begin forwarded message - - - -
Mail-From: ARPANET host MIT-MC received by CMU-10A at 16-Jun-82 21:08:16-EDT
Date: 16 June 1982 20:27-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject: Common Lisp feature
To: Guy Steele at CMU-10A

From experience trying to get things to work in several different
dialects (or just different operating systems), I think that it
is absolutely imperative that there be a simple way to load
packages (I don't mean the lispm sense) that you depend on, if
they're not already present.  Having to do this by hand with
eval-when, status feature, load, etc. etc. is very painful, very
error prone, and rarely portable (you usually at least have to
add additional conditionals for each new system).

How about
	(REQUIRE name)
which is (compile load eval) and by whatever means locally
appropriate, insures that the features specified by name are
present (probably by loading a fasl file from an implementation
specific directory if name isn't on a features list).  This may
want to be a macro so that name need not be quoted.

It's possible that REQUIRE could be extended to load different
things at compiled and load times (e.g. if you only need
declarations at compile time), but I don't care myself.
- - - - End forwarded message - - - -
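
For concreteness, here is a minimal sketch of what EAK describes
(spelled REQUIRE* here only to avoid clashing with anything existing;
the feature list is assumed to live in a variable *FEATURES*, and
FEATURE-FILE, which maps a feature name to the implementation-specific
fasl, is a hypothetical placeholder):

  (defun feature-file (name)
    ;; Placeholder: each implementation would map NAME to its own
    ;; fasl directory by local convention.
    (string-downcase (string name)))

  (defmacro require* (name)
    ;; At compile, load, and eval time, load the feature's file only if
    ;; the feature isn't already present, then record it.
    `(eval-when (compile load eval)
       (unless (member ',name *features*)
         (load (feature-file ',name))
         (pushnew ',name *features*))))

  ;; (require* mumble)  loads MUMBLE's file once and records the feature.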

∂18-Jun-82  2237	JonL at PARC-MAXC 	Re: Suggested feature from EAK 
Date: 18 Jun 1982 22:38 PDT
From: JonL at PARC-MAXC
Subject: Re: Suggested feature from EAK
In-reply-to: Guy.Steele's message of 17 June 1982 1421-EDT (Thursday)
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

Certainly something like this is necessary.  (I must say that I'm impressed
with the facilities for doing such things in InterLisp --  DECLARE: likely
was the precursor of EVAL-WHEN.)   EAK's conception of REQUIRE
seems to be a step in the right direction, and a couple of relevant points
from past MacLisp experience are worth noting: 

  1)                 VERSION NUMBERING
        A few years ago when Bob Kerns and I hacked on this problem, we
     felt that the "requirement" should be for some specific, named feature,
     as opposed to the required loading of some file.  (EAK may have been
     in on those discussions back then).   True, most of our "requirements" 
     were for file loadings (it's certainly easy to make a "feature" synonymous
     with the extent of some file of functions), but not all were like that.  
     There is a very fuzzy distinction between the MacLisp "features" list, 
     and the trick of putting a VERSION property on a (major component
     part of the) file name to indicate that the file is loaded.  
        But a typical "feature" our code often wanted was, say, "file
     EXTBAS loaded, with version number greater than <n>";  thus we'd make
     some dumped system, and then load in a file which may (or may not)
     require re-loading file EXTBAS in order to get a version greater than the 
     one resident in the dump.  Simple file loading doesn't fit that case.
        Xerox's RemoteProcedureCall protocol specifies a kind of "handshaking"
     between caller and callee as to both the program "name" and permissible
     version numbers.

  2)                FEATURE SETS 
          The facility that Kerns subsequently developed attempted to 
     "relativize" a set of features so that a cross-compiler could access the
     "features" in the object (target) environment without damaging those 
     in the (current) compilation environment.  (This was called SHARPC 
     on the MIT-MC NILCOM directory, since it was carefully integrated 
     with the "sharp-sign" reader macro).  I might add that "cross-compilation"
     doesn't mean only from one machine-type to another -- it's an appropriate
     scenario any time the object environment is expected to differ in some
     relevant way.   Software updating is such a case -- e.g. compiling with
     version 85 of "feature" <mumble>, for expected use in a  system with 
     version 86 of <mumble> loaded.   I believe there was a suggestion left
     outstanding from last fall that CommonLisp  adopt a feature set facility 
     like the one in the VAX/NIL (a slightly re-worked version of Kerns's
     original one).

  3)               LOADCOMP
        Another trick from the InterLisp world: there are several varieties of
     "load" functions, two of which might be appropriate for EAK's suggestion.
      3a) LOAD is more or less the standard one which just gobbles down
          the whole file in the equivalent of a Read-Eval loop
      3b) LOADCOMP gobbles down only the things that the compiler would
          evaluate, if it were actually compiling the file;  the idea is to get
          macros etc. that appear under (EVAL-WHEN (COMPILE ...) ...)
          Thus when a file is being compiled it can cause the declarations etc.
          from another to be snarfed; in actual use, LOADCOMP can be (and is)
          called by functions which prepare some particular environment,
          and not just by (EVAL-WHEN (COMPILE) ...) expressions in files.
          [Since InterLisp files generally have a "file map" stored on them, it's
           possible to omit reading any of the top-level DEFUN's; thus this
           really isn't as slow as it might at first seem.]

∂19-Jun-82  1230	David A. Moon <Moon at SCRC-TENEX at MIT-AI> 	Proposed new FORMAT operator: ~U("units")   
Date: Saturday, 19 June 1982, 15:08-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-AI>
Subject: Proposed new FORMAT operator: ~U("units")
To: Guy.Steele at CMU-10A
Cc: bug-lisp at MIT-AI, bug-lispm at MIT-AI, common-lisp at SU-AI
In-reply-to: The message of 11 Jun 82 22:33-EDT from Quux

Tilde yourself!  I think this is a little too specialized to go into FORMAT.

∂02-Jul-82  1005	Guy.Steele at CMU-10A 	SIGNUM function  
Date:  2 July 1982 1303-EDT (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  SIGNUM function

    
    Date:  Wednesday, 30 June 1982, 17:47-EDT
    From:  Alan Bawden <Alan at SCRC-TENEX>
    Subject:  SIGNUM function in Common Lisp

    Someone just asked for a SIGN function in LispMachine Lisp.  It seems
    like an obvious enough omission in the language, so I started to
    implement it for him.  I noticed that Common Lisp specifies that this
    function should be called "SIGNUM".  Is there a good reason for this?
    Why not call it "SIGN" since that is what people are used to calling it
    (in the non-complex case at least)?

I called it "SIGNUM" because that is what most mathematicians call it.
See any good mathematical dictionary.  (Note, too, that the name of the
ACM special interest group on numerical mathematics is SIGNUM, a fine
inside joke.)  However, people in other areas (such as applied mathematics
and engineering) do call it "SIGN".  The standard abbreviation is SGN(X),
with SG(X) apparently a less preferred alternative.

As for programming-language tradition, here are some results:
*  PASCAL, ADA, SAIL, and MAD (?) have no sign-related function.
*  PL/I, BLISS, ALGOL 60, and ALGOL 68 call it "SIGN".
*  SIMSCRIPT II calls it "SIGN.F".
*  BASIC calls it SGN.
*  APL calls it "signum" in documentation, but in code the multiplication
   sign is used as a unary operator to denote it.  (Interestingly, such
   an operator was not defined in Iverson's original book, "A Programming
   Language", but he does note that the "sign function" can be defined
   as (x>0)-(x<0).  Recall that < and > are 0/1-valued.  I haven't tracked
   down exactly when it got introduced as a primitive, and how it came
   to be called "signum" in the APL community.)
*  FORTRAN has a function called SIGN, but it doesn't mean the sign
   function -- it means "transfer of sign".  SIGN(A,B) = |A|*sgn(B),
   but undefined if B=0.

I chose "SIGNUM" for Common LISP for compatibility with APL and mathematical
terminology, and also to prevent confusion with FORTRAN, whose SIGN function
takes two arguments.  I don't feel strongly about the name.  I observe,
however, that if the extension to complex numbers is retained, then
compatibility with APL, the only other language to make this useful
extension, may be in order.  (The signum function on complex numbers
is elsewhere also called the "unit" or "unit-vector" function for
obvious reasons.  It is called "unit" in Chris van Wyk's IDEAL language
for picture-drawing.)
--Guy

∂02-Jul-82  1738	MOON at SCRC-TENEX 	SIGN or SIGNUM 
Date: Friday, 2 July 1982  20:12-EDT
From: MOON at SCRC-TENEX
To:   common-lisp at sail
Subject: SIGN or SIGNUM

Seems to me the truly APL-compatible thing would be for SIGN
with one argument to be the APL unary × and with two arguments
to be the Fortran SIGN transfer function.

∂07-Jul-82  1339	Earl A. Killian            <Killian at MIT-MULTICS> 	combining sin and sind
Date:     7 July 1982 1332-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  combining sin and sind
To:       Common-Lisp at SU-AI

Instead of having both sin and sind (arguments in radians and degrees)
respectively, how about defining sin as
          (defun sin (x &optional (y radians)) ...)
Where the second optional argument specifies the units in "cycles".
You'd use 2*pi for radians (the default), and 2*pi/360 for degrees.  To
get the simplicity of sind, you'd define the variable degrees to be
2*pi/360 and write (sin x degrees).
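
A sketch of how the combined function might read (hypothetical name
SIN-IN-UNITS so as not to redefine SIN; the value 360 anticipates the
correction EAK sends in a following message):

  (defvar radians (* 2 pi))   ; units per full cycle for radian arguments
  (defvar degrees 360)        ; units per full cycle for degree arguments

  (defun sin-in-units (x &optional (units radians))
    ;; Convert X from the given units-per-cycle into radians, then use SIN.
    (sin (* x (/ (* 2 pi) units))))

  ;; (sin-in-units (/ pi 2))      => 1.0 (approximately)
  ;; (sin-in-units 90 degrees)    => 1.0 (approximately)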

∂07-Jul-82  1406	Earl A. Killian            <Killian at MIT-MULTICS> 	user type names  
Date:     7 July 1982 1310-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  user type names
To:       Common-Lisp at SU-AI

My very rough draft manual does not specify any way for a user to define
a type name in terms of the primitive types, which seems like a serious
omission.  Perhaps this has already been fixed?  If not, I propose
          (DEFTYPE name (args ...) expansion)
E.g. instead of building in unsigned-byte, you could do
          (deftype unsigned-byte (s) (integer 0 (- (expt 2 s) 1)))
The need for this should be obvious, even though it doesn't exist in
Lisp now.  Basically Common Lisp is going to force you to specify types
more often than older Lisps if you want efficiency, so you need a way of
abbreviating things for brevity, clarity, and maintainability.  I'd hate
to have to write
          (map #'+ (the (vector (integer 0 (- (expt 2 32) 1)) 64) x)
                   (the (vector (integer 0 (- (expt 2 32) 1)) 64) y))
I can barely find the actual vectors being used!

This also allows you define lots of the builtin types yourself, which
seems more elegant than singling out signed-byte as worthy of inclusion.
Also, it provides a facility that exists in languages such as Pascal.

Now, how would you implement deftypes?  A macro mechanism seems like the
appropriate thing.  E.g. when the interpreter or compiler finds a type
expression it can't grok, it would do
          (funcall (get (car expr) 'type) expr)
and use the returned frob as the type.
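
As a concrete sketch of this mechanism (the backquoted expansion, the use
of SETF on GET, and the EXPAND-TYPE helper are assumptions of the sketch,
not anything taken from the manual):

    ;; Rough sketch only.  A standard might someday define DEFTYPE
    ;; itself, so treat this purely as an illustration of the
    ;; funcall/get expander mechanism described above.
    (defmacro deftype (name args expansion)
      `(setf (get ',name 'type)
             #'(lambda ,args ,expansion)))

    (deftype unsigned-byte (s) `(integer 0 ,(- (expt 2 s) 1)))

    ;; When the interpreter or compiler finds a type expression it
    ;; can't grok, it looks up and applies the expander:
    (defun expand-type (expr)
      (apply (get (car expr) 'type) (cdr expr)))

    ;; (expand-type '(unsigned-byte 8))  =>  (INTEGER 0 255)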

∂07-Jul-82  1444	Earl A. Killian            <Killian at MIT-MULTICS> 	trunc  
Date:     7 July 1982 1420-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  trunc
To:       Common-Lisp at SU-AI

Warning: the definition of trunc in Common Lisp is not the same as an
integer divide instruction on most machines (except the S-1).  The
difference occurs when the divisor is negative.  For example, (trunc 5
-2) is defined to be the same as (trunc (/ 5 -2)) = (trunc -2.5) = -2,
whereas most machines divide such that the sign of the remainder is the
same as the sign of the dividend (aka numerator), which gives -3 for
5/-2.

Implementors should make sure that they do the appropriate testing
(ugh), unless someone wants to propose kludging the definition.

∂07-Jul-82  1753	Earl A. Killian <EAK at MIT-MC> 	combining sin and sind
Date: 7 July 1982 18:32-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  combining sin and sind
To: Common-Lisp at SU-AI

I meant 360, not 2*pi/360 in my previous message.
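
In other words, taking the second argument to be the number of units in
one full cycle -- a hypothetical sketch only, in which GENERIC-SIN,
RADIANS, and DEGREES are invented names:

    (defvar radians (* 2 pi))   ; units per cycle when working in radians
    (defvar degrees 360)        ; units per cycle when working in degrees

    (defun generic-sin (x &optional (units radians))
      ;; rescale X from UNITS-per-cycle into radians, then take the sine
      (sin (* x (/ (* 2 pi) units))))

    ;; (generic-sin (/ pi 2))       sine of pi/2 radians
    ;; (generic-sin 90 degrees)     sine of 90 degrees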

∂07-Jul-82  1945	Guy.Steele at CMU-10A 	Comment on HAULONG    
Date:  7 July 1982 2244-EDT (Wednesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Comment on HAULONG

What do people think of the following suggested change?  I suspect
MacLISP HAULONG was defined as it was because internally it used
sign-magnitude representation.  EAK's suggestion is more appropriate
for two's-complement, and the LOGxxx functions implicitly assume
that as a model.

- - - - Begin forwarded message - - - -
Mail-From: ARPANET host MIT-Multics received by CMU-10A at 7-Jul-82 17:03:29-EDT
Date:     7 July 1982 1352-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  haulong
To:       Guy Steele at CMUa

I think the definition in the manual for haulong:

ceiling(log2(abs(integer)+1))

is poor.  Better would be

if integer < 0 then ceiling(log2(-integer)) else ceiling(log2(integer+1))

I know of no non-conditional expression for this haulong (if you should
ever discover one, please let me know).  The only numbers that this
matters for are -2↑N.  Amusingly enough, I found this exact bug in the two
compilers I've worked on (i.e. they thought it took 9 bits instead of 8
to store a -256..255).
- - - - End forwarded message - - - -
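
For concreteness, the proposed definition amounts to the following bit
count (HAULONG-PROPOSED is a name used only in this sketch, to avoid
clashing with the existing function):

    ;; Number of bits needed to hold N in two's complement, not
    ;; counting the sign bit.
    (defun haulong-proposed (n)
      (when (minusp n)
        (setq n (- -1 n)))        ; -2^k needs no more bits than 2^k - 1
      (do ((bits 0 (+ bits 1))
           (m n (floor m 2)))     ; strip one bit per step
          ((zerop m) bits)))

    ;; (haulong-proposed 255)   =>  8
    ;; (haulong-proposed -256)  =>  8    the case the old formula gets wrong
    ;; (haulong-proposed 256)   =>  9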

∂07-Jul-82  1951	Guy.Steele at CMU-10A 	Re: trunc   
Date:  7 July 1982 2250-EDT (Wednesday)
From: Guy.Steele at CMU-10A
To: Earl A. Killian <Killian at MIT-MULTICS>
Subject:  Re: trunc
CC: common-lisp at SU-AI
In-Reply-To:  Earl A. Killian@MIT-MULTICS's message of 7 Jul 82 16:20-EST

No, EAK, I think there's a bug in your complaint.  Indeed most machines
divide so that sign of remainder equals sign of dividend.  So 5/-2 must
yield a remainder of 1, not -1.  To do that the quotient must be -2, not -3.
(Recall that dividend = quotient*divisor + remainder, so 5 = (-2)*(-2) + 1.)
So TRUNC does indeed match standard machine division.
--Guy

∂07-Jul-82  2020	Scott E. Fahlman <Fahlman at Cmu-20c> 	Comment on HAULONG   
Date: Wednesday, 7 July 1982  23:14-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Guy.Steele at CMU-10A
Cc:   common-lisp at SU-AI
Subject: Comment on HAULONG


EAK's suggestion for Haulong looks good to me.
-- Scott

∂08-Jul-82  1034	Guy.Steele at CMU-10A 	HAULONG
Date:  8 July 1982 1320-EDT (Thursday)
From: Guy.Steele at CMU-10A
To: David.Dill at CMU-10A
Subject:  HAULONG
CC: common-lisp at SU-AI

    Date:  8 July 1982 0038-EDT (Thursday)
    From: David.Dill at CMU-10A (L170DD60)
    
    Isn't this a dumb name?

Yes, it is -- but it's traditional, from MacLISP.  Maybe if its
definition is "fixed" then its name should be also?  (But I happen
to like it as it is.)
--Guy


∂08-Jul-82  1723	Earl A. Killian <EAK at MIT-MC> 	HAULONG
Date: 8 July 1982 20:24-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  HAULONG
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

I think names like this really ought to be changed.  Obviously
you can't rename important functions for aesthetics, but for
obscure ones like this, a cleanup is in order.

integer-length?  precision?

Also, how about bit-count instead of count-bits?  It's less
imperative and more descriptive.

∂08-Jul-82  1749	Kim.fateman at Berkeley 	Re:  HAULONG   
Date: 8 Jul 1982 17:41:12-PDT
From: Kim.fateman at Berkeley
To: EAK@MIT-MC, Guy.Steele@CMU-10A
Subject: Re:  HAULONG
Cc: common-lisp@SU-AI

I would think ceillog2  (ceiling of base-2 logarithm)  would be a
good basis for a name, if that is, in fact, what it does.

You know the function in maclisp which pulls off the n high bits
(or -n low bits)  is called HAIPART...

∂09-Jul-82  1450	Guy.Steele at CMU-10A 	Meeting?    
Date:  9 July 1982 1748-EDT (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Meeting?

[Sorry if this is a duplication, but an extra notice can't hurt,
especially if it is the only one!]

Inasmuch as lots of LISP people will be in Pittsburgh the week of
the LISP and AAAI conferences, it has been suggested that another
Common LISP meeting be held at C-MU on Saturday, August 22, 1982.
Preparatory to that I will strive mightily to get draft copies of
the Common LISP manual with all the latest revisions to people as
soon as possible, along with a summary of outstanding issues that
must be resolved.  Is this agreeable to everyone?  Please tell me
whether or not you expect to be able to attend.
--Thanks,
  Guy

∂09-Jul-82  2047	Scott E. Fahlman <Fahlman at Cmu-20c> 	Meeting?   
Date: Friday, 9 July 1982  23:39-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Guy.Steele at CMU-10A
Cc:   common-lisp at SU-AI
Subject: Meeting?


Guy,
My corporeal manifestation will be there.  My essence may well be
elsewhere.
-- Scott

∂18-Jul-82  1413	Daniel L. Weinreb <DLW at MIT-AI> 	combining sin and sind   
Date: Sunday, 18 July 1982, 17:06-EDT
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: combining sin and sind
To: Killian at MIT-MULTICS, Common-Lisp at SU-AI

One potential problem with your suggestion is that the "cycles" optional
argument seems to be expressed in floating point, and because some
numbers cannot be expressed exactly in floating point, (sin x degrees)
might end up having some error that it would not have if sind were
explicitly defined.  I guess if (equal degrees 360) there is no problem
with degrees themselves but I'm still concerned about the general problem.

∂19-Jul-82  1249	Daniel L. Weinreb <DLW at MIT-AI> 	[REYNOLDS at RAND-AI: [Daniel L. Weinreb <DLW at MIT-AI>: combining sin and sind]]   
Date: Monday, 19 July 1982, 15:40-EDT
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: [REYNOLDS at RAND-AI: [Daniel L. Weinreb <DLW at MIT-AI>: combining sin and sind]]
To: common-lisp at su-ai

Date: 18 Jul 1982 1734-PDT
From: Craig W. Reynolds  <REYNOLDS at RAND-AI>
Subject: [Daniel L. Weinreb <DLW at MIT-AI>: combining sin and sind]
To: DLW at MIT-AI

This was the first message I got after being put on the common-lisp
redistribution list at Rand. If I understand the issue, a general fix
IS in order. In an attempt to be intuitive, my graphics system (ASAS)
uses "revolutions" to measure angles (1 rev = 360 degres = 2pi radians).
And of course, ASAS has its own oddly-named SIN and COS routines, SINE
and COSINE.
-c

∂19-Jul-82  1328	Earl A. Killian            <Killian at MIT-MULTICS> 	boole  
Date:     19 July 1982 1321-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  boole
To:       Common-Lisp at SU-AI

The boole function currently takes exactly three arguments, instead of
an arbitrary number.  Making it take an arbitrary number by
associating to the left would be wrong because the function is
non-associative.  However, there is a fairly obvious definition that is
consistent: boole takes an operation code of 2↑N bits and operates on N
additional integers.  Thus (boole 2#01111111 a b c) is the same as
(logior a b c).

∂19-Jul-82  1515	Guy.Steele at CMU-10A 	Re: boole   
Date: 19 July 1982 1814-EDT (Monday)
From: Guy.Steele at CMU-10A
To: Earl A. Killian <Killian at MIT-MULTICS>
Subject:  Re: boole
CC: common-lisp at SU-AI
In-Reply-To:  Earl A. Killian@MIT-MULTICS's message of 19 Jul 82 15:21-EST

Not bad!  However, I would push for (boole #b11111110 a b c) = (logior a b c).
Then we would have the pretty pattern that
	(logbit (boole op x1 x2 x3 ... xn) j)
	= (logbit op #b<z1><z2><z3>...<zn>) where <zk> = (logbit xk j)
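
(A throwaway sketch of that semantics, assuming nonnegative arguments and
computing only WIDTH result bits; LOGBIT and NARY-BOOLE are names invented
for the sketch, not proposals:)

    (defun logbit (x j)
      (ldb (byte 1 j) x))

    (defun nary-boole (op width &rest args)
      (let ((result 0))
        (dotimes (j width result)
          (let ((index 0))
            (dolist (x args)                  ; build #b<z1><z2>...<zn>
              (setq index (+ (* 2 index) (logbit x j))))
            (setq result
                  (dpb (logbit op index) (byte 1 j) result))))))

    ;; (nary-boole #b11111110 8 a b c) computes (logior a b c) on 8 bits,
    ;; matching the pattern above.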

∂19-Jul-82  1951	Scott E. Fahlman <Fahlman at Cmu-20c> 	boole 
Date: Monday, 19 July 1982  22:48-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: Guy.Steele at CMU-10A
Cc: common-lisp at SU-AI,  Earl A. Killian <Killian at MIT-MULTICS>
Subject: boole


You guys are kidding, right?  BOOLE is a hideous function that was only
kept around so that rasterop hardware and suchlike could be used in a
direct way from within Lisp.  What possible purpose could be served by
extending it to N values?  It is not a stated goal of Common Lisp to
provide an interpretation for all possible extensions to all possible
functions.

-- Scott

∂20-Jul-82  1632	JonL at PARC-MAXC 	Re: boole  
Date: 20 Jul 1982 16:28 PDT
From: JonL at PARC-MAXC
Subject: Re: boole
In-reply-to: Killian's message of 19 July 1982 1321-pdt
To: Earl A. Killian <Killian at MIT-MULTICS>
cc: Common-Lisp at SU-AI

The problem really is that BOOLE is a functional selecting among 16
moderately random functions, rather than a simple function about which
one can talk of "consistent" extensions.  In fact, it would be a pain not
to have a simple n-argument LOGAND, LOGXOR, and LOGOR.  Probably
the only reason for continuing existence of BOOLE is the lack of a
generally recognizable name for (BOOLE 4 . . . ).   Adoption of names like
BITCLEAR (presumably originating from the VAX operation of the same
name) is a step in the right direction.


∂20-Jul-82  1711	Earl A. Killian <EAK at MIT-MC> 	boole  
Date: 20 July 1982 20:13-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  boole
To: JonL at PARC-MAXC
cc: Common-Lisp at SU-AI

The usefulness of BOOLE is NOT because of the lack of a name
for (BOOLE 4 ...), but rather for the case where the 4 is an
expression.

∂20-Jul-82  1737	JonL at PARC-MAXC 	Re: Comment on HAULONG    
Date: 20 Jul 1982 17:33 PDT
From: JonL at PARC-MAXC
Subject: Re: Comment on HAULONG
In-reply-to: Guy.Steele's message of 7 July 1982 2244-EDT (Wednesday)
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI, Kaplan@PARC

I'm a little late in commenting on this, but before anything drastic is
done, perhaps the following should be considered:

HAULONG was clearly defined as a "computer" operation.  Attempts to put it
on a mathematical footing apparently only make it more obscure.  Its intent
is to count the number of "informational" bits in a two's-complement number,
and its encoding in MacLisp simply takes the magnitude first, before
"counting" the bits.

Thus I agree with EAK that 
    ceiling(log2(abs(integer)+1))
is a poor definition for HAULONG, and my solution would be to abandon 
the mathematical-based definition altogether.   I think it would be even 
worse to give it a name which implied that it had some such simple 
mathematical property.

In general, as we discovered with the problem of printing out bitstrings
"in reverse order", there is a conflict with standard mathematical notation
for integers, and a computer users attempt to bitstrings as integers. 

HAULONG stands in the middle of this conflict.


∂21-Jul-82  0759	JonL at PARC-MAXC 	Re: boole, and the still pending name problem.
Date: 21 Jul 1982 07:58 PDT
From: JonL at PARC-MAXC
Subject: Re: boole, and the still pending name problem.
In-reply-to: EAK's message of 20 July 1982 20:13-EDT
To: Earl A. Killian <EAK at MIT-MC>
cc: Common-Lisp at SU-AI

I thought we had been through all this once, and were leaning in the
direction of treating a variable operation to BOOLE as an "arcane" case.
Fahlman's reply seems to imply this too.  

Apart from the objectionable nature of an incomprehensible argument 
(who wants to remember that table from memory?), there is still pending
the problem of a good name for (BOOLE 4 ...),  and possibly the 2 and 10
cases also.



∂23-Jul-82  1435	Earl A. Killian <EAK at MIT-MC> 	boole, and the still pending name problem.
Date: 21 July 1982 22:13-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  boole, and the still pending name problem.
To: JonL at PARC-MAXC
cc: Common-Lisp at SU-AI

The new draft manual has names for the other boolean functions.
Thus BOOLE is indeed useful only on rare occasions, but on those
occasions, what else would you use?

∂23-Jul-82  1436	MOON at SCRC-TENEX 	boole
Date: Thursday, 22 July 1982  03:13-EDT
From: MOON at SCRC-TENEX
To:   Scott E. Fahlman <Fahlman at Cmu-20c>
Cc:   common-lisp at SU-AI, Guy.Steele at CMU-10A,
      Earl A. Killian <Killian at MIT-MULTICS>
Subject: boole

    Date: Monday, 19 July 1982  22:48-EDT
    From: Scott E. Fahlman <Fahlman at Cmu-20c>
    To: Guy.Steele at CMU-10A
    Cc: common-lisp at SU-AI,  Earl A. Killian <Killian at MIT-MULTICS>
    Subject: boole

    You guys are kidding, right?  BOOLE is a hideous function that was only
    kept around so that rasterop hardware and suchlike could be used in a
    direct way from within Lisp.  What possible purpose could be served by
    extending it to N values?  It is not a stated goal of Common Lisp to
    provide an interpretation for all possible extensions to all possible
    functions.

BOOLE isn't being extended to N arguments (NOT values!).  It has always
taken N arguments (2 or more).  It's well-defined what this means.  I notice
there seems to have been a decision to limit it to 3 arguments in the
Common Lisp subset, which is acceptable if gratuitous.

∂23-Jul-82  2323	JonL at PARC-MAXC 	Re: boole, and the still pending name problem - Q & A   
Date: 23 Jul 1982 23:22 PDT
From: JonL at PARC-MAXC
Subject: Re: boole, and the still pending name problem - Q & A
In-reply-to: EAK's message of 21 July 1982 22:13-EDT
To: Earl A. Killian <EAK at MIT-MC>
cc: Common-Lisp at SU-AI

Q:  Thus BOOLE is indeed useful only on rare occasions, but on those
    occasions, what else would you use?

A: BITBLT

Which is probably what the *functional* BOOLE user wanted anyway.
Seriously, what are some examples that support the need for a *functional*
BOOLE?  (other than general backwards compatibility,  and as Moon points
out, the documented version doesn't even satisfy that.)




∂24-Jul-82  0118	Alan Bawden <ALAN at MIT-MC> 	Boole
Date: 24 July 1982 03:55-EDT
From: Alan Bawden <ALAN at MIT-MC>
Subject:  Boole
To: Common-Lisp at SU-AI

    Date: 23 Jul 1982 23:22 PDT
    From: JonL at PARC-MAXC
    Re:   boole, and the still pending name problem - Q & A

    Seriously, what are some examples that support the need for a *functional*
    BOOLE?  (other than general backwards compatibility,  and as Moon points
    out, the documented version doesn't even satisfy that.)

(Boole a (Boole b x y) (Boole c x y)) = (Boole (Boole a b c) x y)

This identity illustrates the fact that the first argument to Boole is more
than just an index into a table of operations.  It demonstrates how one might
assemble a Boolean operation from pieces at runtime.  I have done it.  In the
light of this I don't think that Boole is so obviously worthless.  

I suspect the fact that the definition of Boole in the draft manual only allows
three arguments is to avoid all the issues about just what multiple-argument
Boole is supposed to do.  Since you only want to call Boole in the case where
you don't know which Boolean operation you will be performing, and since in
general that means you don't know if the operation is associative or not, and
since NOBODY can remember which way Boole associates, I can't see that anyone
trying to write clear code would ever want to use anything other than
3-argument Boole.  (Note, please, that logand, logxor, etc. are all defined to
take any number of arguments.)
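
(For the skeptical, a brute-force check of the identity, under the reading
that bit j of (BOOLE op x y) is bit (2*xj + yj) of the four-bit opcode;
BOOLE-BIT, BOOLE-OP, and CHECK-IDENTITY are names local to this sketch:)

    (defun boole-bit (op xbit ybit)
      (ldb (byte 1 (+ (* 2 xbit) ybit)) op))

    (defun boole-op (op x y)            ; two-argument BOOLE, bitwise
      (let ((r 0))
        (dotimes (j 4 r)
          (setq r (dpb (boole-bit op (ldb (byte 1 j) x) (ldb (byte 1 j) y))
                       (byte 1 j) r)))))

    (defun check-identity ()            ; T if the identity holds everywhere
      (dotimes (a 16 t)
        (dotimes (b 16)
          (dotimes (c 16)
            (dotimes (x 16)
              (dotimes (y 16)
                (unless (= (boole-op a (boole-op b x y) (boole-op c x y))
                           (boole-op (boole-op a b c) x y))
                  (return-from check-identity nil))))))))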

∂24-Jul-82  1437	Kim.fateman at Berkeley 	elementary functions
Date: 24 Jul 1982 14:28:39-PDT
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: elementary functions

You might be interested in looking at the (new) HP-15C, which has
elementary, trig, hyperbolic functions, and their inverses, 
over the field of complex numbers.
I understand from Kahan, who specified that calculator,
that the definitions of the functions are somewhat different 
(wrt. branch cuts, etc) from those chosen by Penfield.

While people can naturally disagree on such
matters, I think it might be appropriate to consider this as
an alternative. 

I do not know if there is an implementation of Penfield's APL
stuff, nor how much usage, even in the presence of a good implementation,
would occur.  Users of HP calculators
have come to expect a certain elegance, consistency,
and attention to detail. I believe the 15-C provides this.

∂25-Jul-82  2141	Guy.Steele at CMU-10A 	Re: elementary functions   
Date: 26 July 1982 0040-EDT (Monday)
From: Guy.Steele at CMU-10A
To: Kim.fateman at UCB-C70
Subject:  Re: elementary functions
CC: common-lisp at SU-AI
In-Reply-To:  Kim.fateman@Berkeley's message of 24 Jul 82 16:28-EST

I would certainly like to consider alternatives for elementary functions.
Is there some published text describing Kahan's definitions, preferably
along with a rationale?  (One very important advantage of Penfield's
proposal is that the reasons for the choices are, right or wrong, clearly
stated.)
Penfield's proposal has been implemented, by the I.P. Sharp folks,
I believe.  I'll look up the reference when I'm in my office tomorrow.

Could you send me on-line a brief description of the differences, for
quick immediate evaluation?
--Thanks,
  Guy

∂26-Jul-82  0538	JonL at PARC-MAXC 	Re: Boole, and the value of pi 
Date: 26 Jul 1982 05:38 PDT
From: JonL at PARC-MAXC
Subject: Re: Boole, and the value of pi
In-reply-to: ALAN's message of 24 July 1982 03:55-EDT
To: Alan Bawden <ALAN at MIT-MC>
cc: Common-Lisp at SU-AI

I must say, the identity you pointed out
    (Boole a (Boole b x y) (Boole c x y)) = (Boole (Boole a b c) x y)
deserves at least to be labelled "Gosperian"!    I'll take it to heart instantly,
lest all my programs stop working, and the value of pi become rational.

But, seriously, I'm not sure how to react to your msg, especially in view 
of the qualification you added (with my emphasis):
    "It demonstrates how one **might** assemble a Boolean operation from 
     pieces at runtime. "

Despite the existence of LOGAND and LOGOR, old beliefs die hard -- 
there is just no good reason why BOOLE, if it must exist, has to be
argument-limited in the trivial cases such as (BOOLE 7 x y z).  This sort
of thing exists all over the place in existing MacLisp code, and it has
always been defined to left-associate (which of course doesn't matter for
codes 1 and 7 which are not only associative, but also commutative).

But in the long run, for the vitality of CommonLisp, wouldn't it be better
to relegate hacks to BITBLT (which you didn't comment upon); a "functional"
argument to BITBLT is a necessity, and probably there is an infinite number
of odd facts which will eventually be derived from it.




∂26-Jul-82  1117	Daniel L. Weinreb <dlw at MIT-AI> 	Re: Boole 
Date: Monday, 26 July 1982, 14:07-EDT
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Re: Boole
To: Common-Lisp at SU-AI

I would like to state for the record that either BOOLE should be
strictly limited to three arguments, or it should work as it does in
Maclisp (any number of arguments, left-associative).  It is unacceptable
for it to do anything other than these two things, on the grounds that
adding new arguments incompatibly with Maclisp cannot possibly be so
worthwhile that it is worth introducing the incompatibility.  As to
which of these two things it does, I'll be equally happy with either.

∂04-Aug-82  1557	Kim.fateman at Berkeley 	comments on the new manual    
Date: 4 Aug 1982 15:54:36-PDT
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: comments on the new manual

p 119. The law of trichotomy does not hold for IEEE floating point
standard numbers.  Any language which imposes this as a language
feature cannot conform to the standard.   Better not to mention it.

∂04-Aug-82  1557	Kim.fateman at Berkeley  
Date: 4 Aug 1982 15:55:25-PDT
From: Kim.fateman at Berkeley
To: common-lisp@su-ai

I think forcing things to upper case is an unfortunate relic of the
type 35 tty.  Is there any company that makes an upper-case-only
terminal (CDC, even)?
But this has been flamed to a crisp previously.

∂04-Aug-82  1656	David A. Moon <MOON at SCRC-TENEX at MIT-MC> 	trichotomy    
Date:  4 Aug 1982 1939-EDT
From: David A. Moon <MOON at SCRC-TENEX at MIT-MC>
Subject: trichotomy
To: common-lisp at SU-AI

To clarify Fateman's remark, trichotomy does hold in the IEEE standard
for "normal" numbers.  No 2 of <,=,> can be true at the same time,
however it is possible for none of them to be true if the arguments
ar "unordered", i.e. one of the arguments is a not-a-number, or an
infinity in projective mode (where plus and minus infinity are the
same).   The IEEE standard further specifies that when the result
of a comparison is unordered, an Invalid Operand exception occurs.

Thus trichotomy only breaks down when the user disables trapping for
Invalid Operand exceptions.  Nevertheless, this seems worth a note
in the Common Lisp manual.

The answer when an unordered comparison is performed and the trap
is suppressed is that <, >, <=, >=, and = are all false.  It doesn't
say anything about /=; it's probably supposed to be true.  There is
also supposed to be an unorderedp predicate which returns true
when the arguments are unordered and does not cause an Invalid
Operand exception.
-------

∂04-Aug-82  1738	Kim.fateman at Berkeley 	Re:  trichotomy
Date: 4 Aug 1982 17:36:01-PDT
From: Kim.fateman at Berkeley
To: common-lisp@SU-AI
Subject: Re:  trichotomy

A suggested interpretation is that x<>y is TRUE only when x<y or x>y
(thus x and y are ordered and unequal), whereas x \= y means
NOT(x=y) and is never an invalid operation.

∂05-Aug-82  2210	Kim.fateman at Berkeley 	endp 
Date: 5 Aug 1982 22:05:28-PDT
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: endp

Is the definition backwards? I would expect (endp nil) to be true.
So, I think, does the definition of list-length on the top of p 169.

∂08-Aug-82  1655	Scott E. Fahlman <Fahlman at Cmu-20c> 	Issues
Date: Sunday, 8 August 1982  19:54-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: common-lisp at SU-AI
Cc: slisp: at CMU-20C
Subject: Issues


This message contains a number of comments on the recently-mailed Common
Lisp manual.  I also have a bunch of comments on the business of
Arrays, Vectors, and Fill-pointers that I will save for another message,
and a bunch of things relating only to the presentation itself that I
will send direct to Guy.  I am assuming that Guy plans to produce a
draft of the missing sections before the meeting, and that the list of
queries in the manual are automatically agenda items.  All page numbers
are keyed to the Colander Edition of 29 July.

Page 11: The statement that variable bindings are by default lexical and
indefinite is a very fundamental departure from past Lisps.  It makes
little trouble for a file-to-file compiler, but really impacts the
interpreter heavily, slowing it down by perhaps a factor of 2 whether
the indefinite-extent feature is used or not.  Some clever
special-casing may reduce this penalty at the price of considerable
added complexity.  I can live with this if everyone else can, but we
need to go into this with our eyes open, and not slip this into the
language in a single sentence.  I opposed this change until I realized
that even to make interpreter variables local by default is a messy
business, so we may as well go the whole way; the current situation, in
which compiled variables default to local and interpreted variables to
special, is clearly unacceptable.

Page 19: I oppose the use of an infix J to represent complex numbers.
Hard to read and non-lispy.  If users do a lot of complex hackery and
find #C(n1 n2) too hard to type, they can define a macro to make it
{n1 n2} or some such.

Page 25, first full par: Strings and bit-vectors are specializations of
vector, which in turn implies that they are arrays.  The rest of the
manual is consistent about this, but here it says that they are 1-D
arrays, which might or might not also be vectors.  The phrase "string
vector" is (or should be) redundant, not to mention hideous.

Page 33: I object to making bit-vectors self-evaluating forms, unless
ALL vectors are to be self-evaluating.  (Actually, I would make
everything except a symbol or a list evaluate to itself, but Guy wants
to make passing a general vector to EVAL an error, so that someday we
can define this to be something useful.)  Whatever we do about other
types, it seems really strange to make eval of a vector an error unless
that vector happens to hold 1-bit items.  Strings are different enough
that it doesn't bother me to make them self-eval.

Page 33: Note that in that same paragraph is another of those little
one-liners: keywords will eval to themselves.  The mechanism for this is
left up to the implementor -- it could be wired into eval and the
compiler, or it could just be a setq at make-symbol time.  Having seen
this both ways, I think it is a good idea, on balance.  It is really
ugly to have to type (foo ':key1 val1 ':key2 val2 ...).  And we
certainly don't want to be in the position of having to quote keywords
to EXPRs and not to macros and special forms -- very confusing.  The
price we pay for this is not being able to use the keywords as variable
names in keyword-taking functions, since we cannot assign values to them
-- hence the funny tapdance on page 38-39.

Page 42, query:  I oppose this alleged "safety feature".  It is always a
risk to redefine something, but macros are no worse than anything else.
I might go along with a query when the user redefines ANY built-in
function, as long as there is a switch to make this go away.

Page 46: I favor the suggestion that TYPEP of one argument be renamed to
something else, since it is not really a predicate.  Nothing with "%",
though -- this is a user-level function, though it should be used at
top-level and internally, and not in user-level code.

Page 51: I favor the suggestion that arrays of identical size whose
elements match should be EQUALP, regardless of the element-type of the
array.

Page 51, 120, 132, 133: Guy and I both now feel that the FUZZ argument
to EQUALP should be flushed, along with FUZZY= and FUZZINESS, which
exist only so that EQUALP can be defined in terms of them.  If EQUALP
gets two flonums of different types, they are coerced, compared, and
then must be exactly equal for EQUALP to hold.  Similarly, the hairy
tolerance arguments to MOD and REM must go.  The sentiment behind all of
this is that we don't want to put some sort of half-assed treatment of
precision and tolerance into the white pages.  For most uses this is
just confusing, and when you really want it, it is not good enough.  We
need a yellow pages module to do this right, with each number carrying
around a precision figure, perhaps in conjunction with a package for
infinite-precision flonums.

Page 67: Like lexical variables, including FLET, LABELS, and MACROLET in
the language slows down the interpreter even if these are not used.
Again, we need to explicitly consider whether this is worth the cost.

Page 81-83: Can we agree to rename these things so that the names are
consistent?  All of them should have one prefix, chosen from
"MULTIPLE-VALUE-", "MV-", or "MV".  I don't really care which we choose,
but we should not have a mixture.

Page 98: I like the IGNORE declaration better than naming variables
"IGNORE".  The OPTIMIZE declaration is nearly useless as it is -- there
have to be several levels of Speed vs. safety optimization in the VAX,
for instance.  If we can't come up with something better, we should
leave this kind of declaration up to the implementation and put nothing
into the white pages.

Page 102: There was some discussion earlier about the names for PUTPR
and friends, but Guy viewed this as being sufficiently inconclusive that
he went with the unfortunate decisions made at the November meeting.  I
propose that we retain GET and REMPROP, which had no order-of-arguments
problems anyway, and that we rename the proposed PUTPR to PUT.  The
latter will probably not be used much anyway, if people get used to
using SETF.

Page 117, query: I have no objection in principle to requiring support
for complex numbers, but only AFTER someone has delivered to all of us a
complete, portable set of number functions, that contains full support
for complexes.  This package must also be efficient -- complexes must
not slow down arithmetic when they are not being used -- and in the
public domain.  Guy plans to write such a thing (or oversee the writing)
someday, but until he delivers, complexes cannot go into the required
part of the language.  Some of us have to deliver full, legal
implementations of Common Lisp by certain dates, and we cannot count on
having this code ready by then.  I prefer having a glitch.

Page 130: Has anyone got a reasonable algorithm coded up for
rationalize?  If not, this function must be flushed from the white
pages.

Page 131: I like "ceiling" and "trunc" as names.  Just perverse, I
guess.

Page 132: As noted before, the tolerance arguments to MOD and REM must
go.  These slipped into the language when I wasn't looking.  I'm not
sure where these came from, but if I had seen them arrive I would have
violently opposed them at the time; instead, I will violently oppose
them now.

Page 138: I, too, would like to see a redefinition of HAULONG and
HAIPART to fit better with two's-complement arithmetic.  This change
should be accompanied by a change of name, since it would be no great
loss to consign the old names to the Maclisp compatibility package.

Page 141: Should random be passed an optional state-object explicitly
instead of looking at a special variable?  Seems cleaner.

Page 155: I agree that, if there were no precedents, EQL is a better
default test for the Sequence and List functions than EQUAL.  Maclisp
blew this by giving the good names to the EQUAL versions of things.
However, I consider it totally unacceptable for Common Lisp to redefine
heavily-used functions like MEMBER to use EQL.  If we go to EQL as the
default, it is imperative that we find a new name for the MEMBER-like
function, and probably also for the DELETE-like function, so that MEMBER
and DELETE still do the old thing using EQUAL.  What about MEM and DEL
for the new versions?  With MEMBER and DELETE defined as special cases of
those.  ASSOC can continue to use equal, since the generic form will be
FIND with a key extractor.

Page 185: I have gradually come around to the idea that SETF should
replace most of the user-level updating functions, especially given the
problems with argument order in ASET.  I would keep the traditional
things like SETQ and RPLACA around, but wouldn't mind flushing ASET,
VSET, SETELT, SETNTH, and the ever-popular RPLACHAR and RPLACBIT.

Page 188:  I think that allowing multi-D arrays to have fill pointers is
a terrible idea.  Worse than terrible.

Page 189: If ADJUST-ARRAY-SIZE is going to work on multi-D arrays, its
arguments must provide new values for each dimension.  Trying to parcel
out a single new-size parameter among all of the dimensions is really
hideous.  For 1-D arrays and vectors, I have no problem with the
proposed form, though the descriptions could be made a bit clearer.

Page 209: I would put Eval and friends into the control structures
chapter, and DESCRIBE and INSPECT into the chapter on semi-standard
stuff.

Page 213: We need a kind of stream that really passes the commands and
data to a user-supplied function or closure and another kind where the
user-supplied function gets the commands and supplies the data.
Probably the right way to do this is to pass the command (OUCH,
FLUSH-OUTPUT, or whatever) as the first arg to the function and the
evaluated args to that command as the &rest arg.  This is sort of flavor
like, but as long as we don't get into inheritance and mixing I have no
objection to this.  That would give us enough rope to do all sorts of
weird I/O things.

Page 214: So, what about CHARPOS, LINENUM, and so on?  Build these in or
leave them to the user?  Clearly in some cases these things are
undefined.

Page 234: Arrays???  I would like arrays not to print their guts by
default, or at least to be subject to something like prinlevel and
prinlength, or maybe a special print-array-guts switch.

Page 242: OUT must not constrain the user to sending out only positive
integers.  I think it would be best to ship anything that is NUMBERP,
truncating quietly if necessary.  If the user wants to check, let him do
it.  Remember that binary I/O is used for all sorts of godawful hacks,
and should not be surrounded by lifeguards.

Page 252: Y-OR-N-P and YES-OR-NO-P should both be moved to the chapter
on semi-standard stuff, since they have portable interfaces but are free
to get the question answered in a system-dependent way.  We also need a
general menu selection function of the same sort, but FQUERY is not
right for this task and should be flushed or moved to the yellow pages.

Page 259: I would like to see a more coherent description of the need for
:unspecific, or else flush it.  I also think that the filename objects
should contain a slot for system-type (UNIX, ITS, TOPS-20...) rather
than trying to derive this from the name of the host.

Chapter 23: This is not as far from what I would like to see as Guy
suggests.  I would suggest we eliminate error-restart altogether, along
with the flags in CERROR, and then go with this.  Of course, we need to
define a lot of built-in handlers as well.

Page 279: All of the compiler stuff should go into the semi-standard
stuff chapter as well: here is how you call the compiler, but what it
does is totally up to the implementation.  The COMPILE function needs to
say something about how it interacts with any lexical environment that
was around when the EXPR was defined.

Last chapter: A chapter is planned here that will describe things that
can be called from portable Lisp code, but whose actions are whatever
the implementor thinks is "the right thing" for his system.  Included
are things like TRACE, INSPECT, PPRINT, COMPILE, functions to get the
time of day and the runtime, Y-OR-N-P, etc.  Some of these will be done
very differently on systems with multiple windows, or with time-of-day
clocks, etc., but the idea is to provide a standard interface for
whatever is available.

-- Scott

∂09-Aug-82  0111	MOON at SCRC-TENEX  
Date: Monday, 9 August 1982  00:36-EDT
From: MOON at SCRC-TENEX
To: Common-Lisp at SU-AI
In-reply-to: The message of Sunday, 8 August 1982  19:54-EDT from Fahlman at Cmu-20c

I guess some recipient of this list had better take responsibility for
forwarding my message to whatever the hell "slisp: at CMU-20C" is.

I have a couple things to say about Scott's message, aside from Symbolics'
own comments which should get mailed to GLS today or tomorrow.

    Page 130: Has anyone got a reasonable algorithm coded up for
    rationalize?  If not, this function must be flushed from the white
    pages.

RATIONAL and the main part of RATIONALIZE have existed in the Lisp machine
for a long time.  I wouldn't know whether MIT considers these its property.
The algorithm seems reasonable although its implementation could be made
more efficient.

    Page 132: As noted before, the tolerance arguments to MOD and REM must
    go.

One of my comments is that I was mistaken in suggesting these; the operation
should be a separate function.

    Page 213: We need a kind of stream that really passes the commands and
    data to a user-supplied function or closure and another kind where the
    user-supplied function gets the commands and supplies the data.
    Probably the right way to do this is to pass the command (OUCH,
    FLUSH-OUTPUT, or whatever) as the first arg to the function and the
    evaluated args to that command as the &rest arg.  This is sort of flavor
    like, but as long as we don't get into inheritance and mixing I have no
    objection to this.  That would give us enough rope to do all sorts of
    weird I/O things.

This is totally incomprehensible.  Could we have a clarification?

∂09-Aug-82  2029	Scott E. Fahlman <Fahlman at Cmu-20c>   
Date: Monday, 9 August 1982  23:28-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: "MOON at SCRC-TENEX" at MIT-AI
Cc: Common-Lisp at SU-AI, Slisp: at CMU-20C


Dave,

The SLISP: address is a personal mailing list of Spice Lisp people at
CMU.  I am in the process of making this public so that mail to SLISP @
CMUC will work.  In the meantime, I will forward things.

Fateman just sent me a chunk of code from the depths of Macsyma
(believed to be a relic of the legendary RWG) from which I think we can
extract the necessary algorithm for RATIONALIZE, so strike that whole
comment.

You are right -- the paragraph I sent on function streams is
incomprehensible even to me.  Sorry, it's been a rough month.  The
attempted clarification follows:

It would be convenient for many purposes to have a type of output stream
that accepts characters or bytes but, instead of sending them off to a
file, passes the data to a user-supplied function, perhaps a closure.
Similarly, it would be useful to have a type of input stream that, when
asked for some input, calls a user-supplied function to obtain the data,
rather than sucking the characters or bytes in from a file.  This
mechanism could be used to implement such things as broadcast and string
streams, if these were not built in already.  Presumably there will be a
need for more such hacks in the future, and this mechanism gives us a nice
flexible hook.

What I propose is the following:

MAKE-FUNCTION-INPUT-STREAM fn				[function]
MAKE-FUNCTION-OUTPUT-STREAM fn				[function]
MAKE-FUNCTION-IO-STREAM fn				[function]

These functions create and return special stream objects that can be
used wherever regular input, output, and i/o stream objects are legal.
FN must be a function that accepts one required argument and a &rest
argument.  When some I/O operation is called on one of these streams,
the name of that operation (a symbol such as OUCH) is passed to FN as
the first argument, and all of that operation's arguments (evaluated)
are passed to FN as additional arguments.  Whatever FN returns is
returned by the OUCH (or whatever) as its value.

For example, if X is a function output stream whose associated function
is FX, and we do (OUCH #\a X), we end up calling FX with arguments OUCH,
#\a, and the value of X.  The FX function can then do whatever it wants
to with the #\a -- perhaps encrypt it and shove it into a string, or
play an "a" tone on the noisemaker, or whatever.  Clearly, the FX will do
a big dispatch on its first argument and then will process the other args
accordingly.  Whatever FX returns is the return value of the OUCH, tail
recursively.

The user might or might not want FX to handle all of the more esoteric
operations, such as FORCE-OUTPUT.  If FX recognizes FORCE-OUTPUT as its
first argument and does something useful, fine; if not, the big dispatch
will fall through and, by convention, an
:UNKNOWN-OPERATION-TO-FUNCTION-STREAM error will be signalled.
(We might want to give that a shorter name.)
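
(To make the interface concrete: a hypothetical FN for a function output
stream under this proposal.  MAKE-COLLECTING-FN and its CONTENTS operation
are inventions of this sketch; MAKE-FUNCTION-OUTPUT-STREAM and OUCH are
just the names proposed above.)

    ;; The closure dispatches on the operation name and accumulates
    ;; OUCHed characters.
    (defun make-collecting-fn ()
      (let ((chars '()))
        #'(lambda (operation &rest args)
            (case operation
              (ouch (push (car args) chars))    ; args = (char stream)
              (force-output nil)                ; nothing is buffered here
              (contents (reverse chars))        ; private to this FN
              (t (error "unknown operation to function stream"))))))

    ;; (setq fn (make-collecting-fn))
    ;; (setq s  (make-function-output-stream fn))
    ;; (ouch #\a s)
    ;; (ouch #\b s)
    ;; (funcall fn 'contents)   =>  (#\a #\b)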

Everyone who has seen this proposal has noticed that it is extremely
flavor-like.  I don't think we want to let flavors permeate the language
-- not yet, anyway -- but I don't object to the sort of message-passing
protocol used here.  It is the inheritance and flavor-mixing parts of
the flavor system that I don't trust, not the basic idea of active
objects and message-passing interfaces.

-- Scott

∂09-Aug-82  2220	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and Vectors   
Date: Tuesday, 10 August 1982  01:19-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: common-lisp at SU-AI
Cc: slisp: at CMU-20C
Subject: Arrays and Vectors


OK, I am ready to cave in and allow fill pointers in ALL vectors
and all 1-D arrays.  (I think it is a serious mistake to allow fill
pointers in multi-D arrays.)  It is just too complex to allow fill
pointers in strings and not in other vectors, or to go to 1-D arrays
whenever you want the elasticity that a fill pointer can provide.

There is no added time cost for accessing a vector with a fill pointer,
as long as you were doing runtime bounds-checking anyway, and the space
cost is just one extra word per vector, so it's not a big deal.  Guy
currently holds the same point of view on this, and the new manual is
written this way, modulo some glitches.

I feel strongly that to make this coherent, we want the fill pointer to
be treated as the end of the vector or array for essentially all
purposes.  In particular, LENGTH means the length from 0 to the fill
pointer.  The ALLOCATED-LENGTH can be looked at, but it is normally only
meaningful when you want to grow the vector (i.e. move the fill
pointer).  The rest of the time, the space beyond the fill pointer is
inaccessible to built-in functions.  The Lisp Machine is very
inconsistent about all this, with LENGTH (meaning allocated length) used
in some places and ACTIVE-LENGTH in others -- this is presumably because
the fill pointers were grafted on as an afterthought.  Of course, for
most vectors most of the time, LENGTH and ALLOCATED-LENGTH will be the
same.
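
(A tiny hypothetical illustration of that behavior; the MAKE-VECTOR
argument conventions and the name ALLOCATED-LENGTH are placeholders, not
settled language:)

    (setq v (make-vector 100 :fill-pointer 3))  ; room for 100, 3 active
    (length v)             ; =>   3   counts only up to the fill pointer
    (allocated-length v)   ; => 100   of interest only when growing
    (elt v 50)             ; an error: index beyond the fill pointer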

If vectors can have fill-pointers, we have many fewer uses for 1-D
arrays.  We would use these only in the odd cases where we want to make
some use of the indirection that the more complex array structure
provides, either for displacement or to allow arbitrary growth of the
array while preserving EQ-ness.  I think that having MAKE-VECTOR be a
separate function from MAKE-ARRAY is a very good thing; the previous
plan where you always called MAKE-ARRAY and sometimes got a vector was
very confusing.  In the new version of the manual VECTOR is still a
sub-type of ARRAY, and all of the ARRAY operations work on vectors
except, in some cases, those concerned with changing the array's size.

If you want vector accesses to be optimally efficient on something like
a VAX, you have to tell the compiler that you have a vector (or maybe
even a certain type of vector) and not a more general array.  This can be
done in any of several ways using the declaration system:

(VELT foo n)			; All equivalent in meaning and efficiency.
(ELT (THE VECTOR foo) n)
(AREF (THE VECTOR foo) n)

(VREF foo n)			; All equivalent in meaning and efficiency.
(ELT (THE (VECTOR T) foo) n)
(AREF (THE (VECTOR T) foo) n)

Instead of a THE construct, of course, you can declare the type of
variable FOO when it is bound.  If we go over to SETF as the master
changing form, we don't have to worry about ASET, VSET, VSETELT, etc.
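
(For instance, a sketch of the declaration route; the exact TYPE
declaration syntax here is an assumption, not a quotation from the
manual:)

    (defun sum-vector (foo n)
      (declare (type (vector t) foo))   ; FOO is a general vector
      (let ((sum 0))
        (dotimes (i n sum)
          (setq sum (+ sum (aref foo i))))))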

The Lisp Machine people can just forget about these declarations, at
some small cost in efficiency if they ever move their code to a Perq or
a larger cost if they move to a Vax.

-- Scott

∂10-Aug-82  0003	MOON at SCRC-TENEX 	Your stream proposal
Date: Tuesday, 10 August 1982  02:58-EDT
From: MOON at SCRC-TENEX
To: Scott E. Fahlman <Fahlman at Cmu-20c>
Cc: Common-Lisp at SU-AI
Subject: Your stream proposal

It really is unfortunate that the system of stream flavors in the Lisp
machine didn't get documented in the manual (being too recent), or it
would probably convince you that the language is sadly deficient if
it doesn't have flavors.  For instance, in your proposal for user-defined
streams the user has to implement everything himself, which if Lisp machine
experience is any guide means that these streams will tend to be inefficient,
unreliable, and incompatible with standard streams in subtle ways, no matter
how experienced the user (or system programmer) is.
The family of stream flavors in the Lisp machine allows the user to interface
at various levels, providing character I/O or buffered block I/O with other
flavor components taking care of the rest.  Mixins exist to turn on a variety
of features that one might need in a stream.  This is of course almost a trivial
example of what can be done with flavors.

∂10-Aug-82  0549	JonL at PARC-MAXC 	Need for "active" objects, and your STREAM proposal.    
Date: 10 Aug 1982 05:50 PDT
From: JonL at PARC-MAXC
Subject: Need for "active" objects, and your STREAM proposal.
To: Scott E. Fahlman <Fahlman at Cmu-20c>
In Reply To: Your msg of Monday, 9 August 1982  23:28-EDT
cc: Common-Lisp at SU-AI

Again, risking being "out of step", I'll have to say that I'm a counterexample
to your conjecture in this note:
  "Everyone who has seen this proposal has noticed that it is extremely
   flavor-like. "
In fact, your proposal is a subset of MacLisp's SFAs.  A major deficit of
SFA's is that by having *no* inheritance mechanism,  it's quite difficult
to "get things correct" when you try have a SFA which more-or-less 
emulates some system capability, or when you to make a minor extension 
of one SFA definition.  Look at the source code for QUERIO to see how bad
it can be (try [MIT-MC]LSPSRC;QUERIO >).

On the other hand, I too share your skepticism about the need for
pervasiveness (or is it "perverseness") of flavors.  Two major thinking
points come to mind:

  1) The NIL design had a much more primitive notion for active "objects",
    under which both smalltalk-like CLASSES and flavors could be built.  
    In fact, the MacLisp/NIL system did just that (Dubuque built a flavor
    system over EXTENDs, while co-existing with the CLASS system).  This
    design permitted a default "inheritance" technique (which could trigger
    microcode on machines that have it) but didn't force you to take only one.
    Also, by having the "inheritance" technique vary on a per-class basis, even
    the ACTOR/Hewitt people could be satisfied with it.  

    Inheritance techniques are still under active debate and research (The 
    upcoming SmallTalk will probably have multiple inheritance even!), so it
    would be a bad idea for CommonLisp to standardize on one of the many
    alternative proposals right now.

  2) Just about anything doable with smalltalk-like classes is (easily) doable 
    with flavors; but the question arises "how many things are (easily) doable
    with flavors but *not* doable (reasonably) with lesser facilities?"  One's
    answer may depend on whether or not he views flavors as some kind of
    panacea.  I had always thought that a window system was flavor's strong
    case, but before making up your mind on this point, I suggest you see a
    demonstration of the non-flavor InterLisp-D window system at AAAI. 

    According to a bunch of non-Xerox linguists who've used both window
    systems, the user-sensible features of InterLisp-D were preferred; apparently
    the Xerox guys put more energy into developing interesting window ideas
    than in developing Yet-Another-Sort-of-Smalltalk.  This "opinion" comes to 
    me secondhand, from a linguistics conference recently, but the sources could
    be tracked down.  I don't think these linguists had an opinion on flavors 
    (likely they didn't even understand them),  but probably they were better 
    off for the lack of that opinion/knowledge;  after all, they only wanted to
    use the system, not implement it from scratch.

∂10-Aug-82  0826	Scott E. Fahlman <Fahlman at Cmu-20c> 	Function streams
Date: Tuesday, 10 August 1982  11:26-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: common-lisp at SU-AI
Cc: slisp at CMU-20C
Subject: Function streams


OK, I believe that some sort of inheritance is going to be extremely
useful -- perhaps essential -- for these function-streams.  It may be
that flavors are the right thing, but it is pretty clear to me that we
are not ready to standardize on this, versus all the other inheritance
mechanisms that people have proposed.  Flavor-mixing looks wrong to me,
but maybe that's just my lack of experience with such things.  Maybe in
a year flavors will be my favorite language construct.

In any event, we don't want to clutter up Common Lisp with half-baked
stabs at object-oriented mechanisms that would get in the way of
more complete implementation-dependent mechanisms or that would be ugly
relics if Common Lisp ever standardizes on one sort of flavor-oid.
There seem to be two reasonable courses of action:

1. Forget about function streams as far as the white pages are
concerned.  Any implementation-specific or yellow-pages active-object
system can provide its own version of such streams, without having to
compete with the kludge I proposed.

2. If we can define a non-controversial interface to such streams,
document that in the white pages, but leave open the question of what
sort of inheritance is used to provide the actions.  Then we have a
standardized hook, without getting into the hairy issues.  Is my proposed
interface acceptable to all groups?  If not, are there specific
counter-proposals?  I am not fanatical about this interface -- anything
similar would do, and would be preferable if it fit neatly into flavors
or whatever.

-- Scott

∂11-Aug-82  0641	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-ML> 	Function streams
Date: Wednesday, 11 August 1982, 09:37-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-ML>
Subject: Function streams
To: Fahlman at Cmu-20c, common-lisp at SU-AI
Cc: slisp at CMU-20C
In-reply-to: The message of 10 Aug 82 11:26-EDT from Scott E. Fahlman <Fahlman at Cmu-20c>

Having had a great deal of experience with streams, the stream protocol,
and the issues of inheritance and sharing for streams, I can assure you
that without some kind of inheritance, it is very hard to create a
stream protocol that is at all satisying.  As a simple example, it is
nice to allow a FORCE-OUTPUT message to be handled by some streams, but
if someone wants to implement a simple stream with no buffering, he has
to explicitly implement FORCE-OUTPUT as a null method.  This makes it
intolerably difficult to write a very simple stream.

In the early days of the Lisp Machine, we had a function called
STREAM-DEFAULT-HANDLER that any stream could call with an unrecognized
message; the stream would pass ITSELF in as an argument, so that there
could be a STRING-OUT operation.  The default handler for the STRING-OUT
operation would just call the TYO operation for each character in the
string, but any stream that wanted to have hairy, efficient multi-char
output could handle STRING-OUT itself.  This mechanism was a very simple
form of inheritance, with only one superclass in the world, and the only
form of method-combination being shadowing.  This predates flavors or
even the earlier Lisp Machine class mechanism; the default handler was
just a function that stream functions all called when given an unknown
keyword.

This worked pretty well for us, for a while.  One problem was that all
stream functions had lambda-lists like (MESSAGE &OPTIONAL ARG1 ARG2 ...)
because the meanings of ARG1 and ARG2 depended on the value of the
message.  ARG1 and ARG2 are pretty unclear names.  The immediate
solution to this problem was the implementation of DEFSELECT (see the
Lisp Machine manual).  (Also, DEFSELECT produced functions that did the
big dispatch with a microcode assist, but that is not relevant to this
discussion.)

However, as we wrote more advanced I/O software, this mechanism soon
showed itself as being quite deficient.  Each stream that wanted to
support hairy multi-char output and buffered input and so on would have
to implement those concepts itself.  It turns out that it is not very
easy to write code that can accept an input buffer full of text and dish
it out in pieces of the right size.  This code got implemented many
different times by different people, in order to implement editor buffer
streams, file system streams, streams to DMA devices, network streams,
and so on.  Most of the implementations had subtle fencepost errors, and
many of them did not implement QUITE the same protocol regarding what to
do at the end-of-file (even though the protocol was completely
documented).

To deal with this problem, we came up with a set of flavors that
implemented all this stuff in one place: very efficiently, and without
bugs.  Now, if you want to write a stream that is capable of doing
STRING-OUT and LINE-IN and LINE-OUT operations, all you have to do is
create a new flavor, teach it how to handle a very small number of
messages (get next input buffer, advance input pointer, here is an
output buffer, etc.).  The flavors provide all of the actual
stream-protocol interfaces such as TYI and TYO and STRING-OUT and
LINE-IN and FORCE-OUTPUT, by sending the stream these few messages that
you provide.  Everything in the system was changed to use these flavors,
and a whole class of bugs finally vanished.  (This is what Moon was
referring to when he mentioned the STREAM flavors in his earlier
message.)

(By the way, I don't think that these flavors make much use of :BEFORE
and :AFTER daemons (except the ASCII-TRANSLATING-MIXINs, which have
daemons to translate between character sets), but they do use
non-hierarchical inheritance.  How anybody can get useful work done when
restricted to hierarchical inheritance is beyond me; the world just
doesn't work hierarchically.  But anyway.)

The point of all of this is that non-trivial message receiving is needed
if you really want to make a good I/O system.

Now, one thing to keep in mind about your proposal is that all it
discusses is a message sending protocol.  To wit, it says that messages
are sent to an object by applying that object to a symbol that specifies
the message name, and the rest of the arguments to the method.  It
doesn't say anything about how someone might receive such a message.  In
Common Lisp, usually you'd have to create a closure over a function that
has a big CASE-like construct on its first argument and had arguments
named ARG1 and ARG2 and so on; this is what we used to do many years ago
in the Lisp Machine, and it does work even though it is not as easy and
elegant as what we have now.  However, any particular Common Lisp
implementation could choose to provide more advanced message receiving
facilities, such as DEFSELECT, classes, or flavors.

In fact, this is the way that message-sending is currently defined to
work in the Lisp Machine.  We are planning to extend that definition
someday, so that it would be possible to send messages to primitive
objects (numbers, symbols) as well as to instances of flavors.  To that
end, we have stopped using FUNCALL to send messages and now use SEND.
For the time being, SEND just does FUNCALL, but someday it will be made
more clever and it will be able to deal with non-functions as its first
argument.

Because of this, if you put in your stream proposal, it would be somewhat
nicer for us if it were defined to call SEND to send the message, and
Common Lisp SEND were defined to be like FUNCALL when the argument is
a function and be undefined otherwise.

The other problem with the proposal is that we already have a very
similar mechanism but it uses different names for the operations.  In
particular, all of the operation names are keyword symbols.  Typical
symbols are :TYI, :UNTYI, :TYO, :STRING-OUT, :LINE-IN, :LINE-OUT,
:FORCE-OUTPUT, :CLOSE, :CLEAR-OUTPUT, :CLEAR-INPUT, and :TYI-NO-HANG.
It would be a shame if we had to implement two I/O systems, one for
Common Lisp and one for internal use.  It would be a lot of work to
completely change all of our message names, and it would not fit into
the rest of our system nicely if they were not keywords.  (Of course,
adding :INCH and :OUCH to our system is very easy, since we just add
them to one particular flavor and everybody automatically gets them.)

It might possibly be better to just forget about the whole thing as far
as the white pages are concerned, though.  I'm not sure that this
ability is really important for definition of portable software, and I
think it might be a lot of work to figure out how to design this in such
a way that we'll all be happy with it.

∂11-Aug-82  1914	Scott E. Fahlman <Fahlman at Cmu-20c> 	Function Streams
Date: Wednesday, 11 August 1982  22:13-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: common-lisp at SU-AI
Cc: slisp at CMU-20C
Subject: Function Streams


I now believe that we ought to leave function streams out of the white
pages.  Implementations would then be free to add whatever is compatible
with their own version of active objects and message-passing.  Once we
have all had a reasonable chance to play with Smalltalk and with
flavors, then maybe we will be able to converge on some such mechanism
for Son of Common Lisp.  In the meantime, we probably want to steer
clear of interim or compromise solutions.

-- Scott

∂12-Aug-82  1402	Guy.Steele at CMU-10A 	Common LISP Meeting, etc.  
Date: 12 August 1982 1702-EDT (Thursday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Common LISP Meeting, etc.

I missed the window for mailing out corrected ERROR chapter
plus some other stuff.  For those of you who are pre-registered
for the LISP conference and have indicated that you will stay for the
meeting, you will find a copy in your registration packet.  Otherwise,
if you are coming to the meeting but are not preregistered for the
LISP conference or will not register, come to the registration desk
and say you're attending the Common LISP meeting and ask for a
Common LISP packet.  I will try to get an agenda for the meeting ready
to put with the ERROR chapter for you.
--Frantically,
  Guy

∂12-Aug-82  2002	Guy.Steele at CMU-10A 	Meeting - one more note    
Date: 12 August 1982 2302-EDT (Thursday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Meeting - one more note

Please bring your copies of the 29 July manual to the meeting with
you, if possible -- I don't have enough copies to go around redundantly.
We'll certainly need them for reference.
--Guy

∂13-Aug-82  1251	Eric Benson <BENSON at UTAH-20> 	Notes on 29 July manual    
Date: 13 Aug 1982 1348-MDT
From: Eric Benson <BENSON at UTAH-20>
Subject: Notes on 29 July manual
To: Common-Lisp at SU-AI

Just a few comments on the 29 July edition of the Common Lisp manual.
Some of these have already been mentioned.

p.6	SAMPLE-MACRO is shown returning the wrong value!
		(sample-macro x (+ x x)) => nil

p.10	The COMPOSE example appears to use evaluation of the CAR of a form,
	a la Scheme.  The intended action would seem to require the use of
	FUNCALL instead.  Specifically, I would assume that
		(defun compose (f g)
		  #'(lambda (x) (f (g x))))
	would use the global function definitions of F and G and ignore the
	parameters.  The intention I believe was for
		(defun compose (f g)
		  #'(lambda (x) (funcall f (funcall g x))))
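	For example, with the FUNCALL version one would get
		(funcall (compose #'1+ #'sqrt) 16.0) => 5.0
	whereas the first version would simply call whatever global
	definitions of F and G happen to exist.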

p.18	I assume that "b" and "B" are reserved for the future addition of
	bigfloats.  Since Lisp code exists for bigfloats, why not
	just include them in the standard?

p.33	Numbers, strings and bit-vectors are self-evaluating.  Why not
	everything but symbols and conses?  If you want to reserve other
	types for possible future extensions to EVAL, why are bit-vectors
	special?  Certainly characters should fall into the same category.
	Why not all types except symbols, conses, (array t) and (vector t),
	since those are the only types which could ever be useful in EVAL?

p.37	Since a select-expression can be used wherever a lambda-expression
	is legal, you should include select-expression as a subset of
	lambda-expression for the purposes of documentation.  Otherwise you
	will have to look for every occurrence of "lambda-expression" and make
	sure it says "lambda-expression or select-expression".

p.44	Is the compiler allowed to substitute the constant value of a
	defconst variable?  If not, it should be made clear what the
	preferred way of implementing "manifest constants" is, i.e. #.foo
	or macros.

p.84	The rule about AND and OR passing multiple values only from the
	last form is strange.  I understand the implementational reason for
	it, but it's one of those rules that makes the language difficult
	for novices.  I would prefer to see multiple values simply
	disallowed in AND and OR.

p.102	The comment on the (non) usage of the property list by the
	interpreter is gratuitous.  It really belongs as a suggestion in
	the blue pages, not as user documentation in the white pages.  Of
	course it would be foolish to implement a Common Lisp interpreter
	using the property list to store name strings, values or functions,
	but it's no business of the language definition saying Common Lisp
	"doesn't".

p.168	The sense of ENDP is reversed, I believe.  It should be true of nil
	and false of conses.

p.216	It is rather unfortunate that the Common Lisp reader requires the
	implementation of multiple values for what used to be splice
	macros.  As far as I can tell, the only way to tell how many values
	are returned by a function is to make a list of them anyway (there
	really should be another way, perhaps a special form like
	MULTIPLE-VALUE which binds the number of values to one of the
	variables).  I would have thought having the read macro
	tail-recursively call READ would be equally good, and it would make
	bootstrapping much easier.  This appears to be the only place where
	multiple values would be necessary for a Common Lisp
	self-implementation.

-------

∂23-Aug-82  1326	STEELE at CMU-20C 	Results of the 21 August 1982 Common LISP Meeting  
Date: 23 Aug 1982 1612-EDT
From: STEELE at CMU-20C
Subject: Results of the 21 August 1982 Common LISP Meeting
To: common-lisp at SU-AI

The Common LISP Meeting started at 9:45 AM on 21 August 1982.
The following people attended the meeting: Guy Steele, Walter van Roggen,
Gary Brown, Bill van Melle, David Dill, Richard Greenblatt, Glenn Burke,
Kent Pitman, Dan Weinreb, David Moon, Howard Cannon, Gary Brooks, Dave Dyer,
Scott Fahlman, Jim Large, Skef Wholey, Jon L White, Rodney Brooks,
Dick Gabriel, John McCarthy, and Bill Scherlis.

One hundred and fifty agenda items had been previously prepared.
These were discussed and for the most part resolved.  Another eleven
items were brought up at the end of the meeting.  The meeting was
adjourned at 6:00 PM.

Following is a copy of the prepared agenda, annotated with the results
of the meeting, and the additional topics and their results.

--Guy


                     AGENDA FOR COMMON LISP MEETING
                             21 AUGUST 1982
               ANNOTATED WITH THE RESULTS OF THE MEETING

   1. What objects should be self-evaluating?  In particular,
      should a bit-vector self-evaluate, and should an (ARRAY (MOD
      4)) self-evaluate?  Suggestion:  everything should
      self-evaluate except for symbols, structures, and all objects
      other than numbers that have pointer components.  

          GLS will make a proposal.  Symbols and lists do not
          self-evaluate, numbers and strings do.  Odd objects
          should be an error.

   2. Should something be done about the fact that BYTE specifiers
      use a start-count (actually, a count-start) convention, while
      the rest of the language uses a start-end convention?  

          No.  This will be left as is.

   3. Shall keywords be self-evaluating, and kept in a separate
      package?  The motivation for the latter is that they will
      always be printed with a colon; for the former, that keyword
      argument names need not be written with a quote and a colon,
      but only a colon, which makes the call syntax consistent with
      that for macros.  

          Keywords are symbols, but are kept in their own
          package.  SYMEVAL of a keyword works, and returns
          that same keyword.  INTERN must arrange for the value
          of a keyword to be that keyword when that keyword is
          interned in the keyword package.
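
          For example (illustrative):

              (symeval ':start)  =>  :start
              :start             =>  :start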

   4. Should the third (tolerance) argument to MOD and REMAINDER be
      eliminated?  

          Yes.

   5. Should all tolerance arguments be eliminated?  This would
      include elimination of FUZZY=.  

          Yes.

   6. Should arrays of identical size whose elements match be
      EQUALP, even if their element storage format is different?
      (Example:  can a bit-vector be EQUALP to an array of pointers
      that happens to contain only the integers 0 and 1?)  

          Yes, EQUALP descends into arrays and does this.  On
          the other hand, EQUAL descends essentially only into
          things that self-evaluate.  This will be clarified
          explicitly.

   7. While EQL is the best default test for the sequence
      functions, is it better to make MEMBER, DELETE, and ASSOC
      continue to use EQUAL for backward compatibility?  Perhaps
      alternative names for the general, EQL-defaulting case can be
      found?  

          EQL will be used for everything.  Consistency is more
          important than compatibility here.

   8. Scope and extent of GO and RETURN.  Can one GO out from an
      argument being evaluated?  Can a GO break up special variable
      bindings?  Can a GO break up a CATCH?  Does GO work despite
      funargs?  

          Yes to all of these.  The tags established by a PROG
          have dynamic extent and lexical scope; the same goes
          for RETURN points.  Compilers are expected to
          distinguish the obvious easy cases from the hard
          ones.

   9. Forbid multi-dimensional arrays to have fill pointers?  

          Yes.

  10. Should updating functions be eliminated in the white pages in
      favor of consistent use of SETF?  

          Keep RPLACA, RPLACD, and SETQ out of respect for
          tradition.  All the others can go.  This solves the
          controversy surrounding PUTPROP, as well as the
          difficulty with ASET and VSET differing in their
          argument order.

  11. Is the use of * in type specifiers satisfactory to indicate
      missing elements?  (RMS suggested use of NIL, but there is a
      problem: (ARRAY INTEGER ()) means a 0-dimensional array of
      integers, while (ARRAY INTEGER *) means any array of
      integers.)  

          Yes.

  12. Should &KEY be allowed in DEFMACRO?  

          Yes.

  13. Should complex numbers be required of every COMMON LISP
      implementation?  

          There was discussion to the effect that the precise
          definitions of the branch cuts are still in a state of
          flux.  There are minor differences between the APL
          proposal and the proposal by Kahan that encompasses
          the proposed IEEE floating-point format.  Keep complex
          numbers in the manual, with a note that the precise
          definitions are subject to change and are expected to
          be tied down before January 1, 1984, at which time
          they will be required of all COMMON LISP
          implementations.

  14. Is the scheme outlined for DEFSTRUCT, wherein constructor
      macros can actually be functions because of a conventional
      use of keywords, acceptable?  

          Yes.  Also note that the various functions may be
          automatically declared INLINE at the discretion of
          the implementation.

  15. Why should the DEFSTRUCT default type be anything in
      particular?  Let it be whatever is implementationally best,
      and don't mention it.  

          Simply note that TYPEP of two arguments must work
          properly on structures in the default case.  For
          example, after SHIP is defined, and X is a ship, then
          (TYPEP X 'SHIP) must be true, and (TYPEP X 'ARRAY)
          may or may not be true.

  16. Proposed to flush DEFSTRUCT alterant macros, advising the
      user always to use SETF.  

          Yes.  However, there must also be a way to provide
          one's own SETF methods via DEFSETF.

  17. Can we standardize on keywords always being used as
      name-value pairs?  The worst current deviants are
      WITH-OPEN-FILE and DEFSTRUCT options.  

          Yes.  The Lisp Machine LISP group will make a
          proposal soon for OPEN, WITH-OPEN-FILE, and
          DEFSTRUCT.

  18. What types may/must be supertypes of others?  What types may
      overlap, and which must be mutually exclusive?  An explicit
      type tree is needed.  Examples: can bignums be vectors?  Are
      BIT-VECTOR and (ARRAY (MOD 2)) identical, disjoint, or
      overlapping?  

          Let ``>'' mean ``is a supertype of''.  Let ``#''
          mean n-ary ``is disjoint from''; ``A#B#C#D'' means
          that A, B, C, and D are pairwise disjoint.  Let ``#''
          have higher precedence than ``>'', so that
          ``Z>A#B#C'' means that A, B, and C are pairwise
          disjoint subtypes of Z.  Then:

              t > common > cons # symbol # array # number
                           # character
              number > rational # float # complex
              rational > integer # ratio
              integer > fixnum # bignum
              float > short-float
              float > single-float
              float > double-float
              float > long-float
              character > string-char
              array > quickarray/vector

          The four subtypes of FLOAT are such that any pair are
          either identical or disjoint.  The type COMMON
          encompasses all data types defined in the white
          pages; perhaps its definition should be made
          carefully so that, for example, an implementation can
          introduce new kinds of numbers without putting them
          under COMMON.  So, perhaps COMMON should instead be
          defined to be

              COMMON > cons # symbol # array # rational
                 # short-float # single-float # double-float
                 # long-float # standard-char # hash-table
                 # readtable # package # pathname # stream

          Anyway, you get the idea.
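
          To illustrate how such a tree would be consulted, one would
          expect, for example:

              (typep 3 'fixnum)     =>  t
              (typep 3 'rational)   =>  t
              (typep 1/2 'ratio)    =>  t
              (typep "abc" 'array)  =>  t    ;strings are vectors, hence arrays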

  19. Are there type names (for Table 4-1) such as RATIONAL,
      FUNCTION, STRUCTURE, PACKAGE, HASH-TABLE, READTABLE,
      PATHNAME, and so on?  

          Yes, except for STRUCTURE.  Also, eliminate
          STRUCTUREP.

  20. How can one ask, ``Is this an array capable of holding
      short-floats?''?  

          Invent a function that extracts this information from
          an array.

  21. Rename DEFCONST to avoid conflict with Lisp Machine LISP.  (I
      think the November meeting actually voted for DEFVALUE, but I
      forgot to edit it in.  --GLS) DEFCONSTANT would do.  What
      Lisp Machine LISP calls DEFCONST should go into COMMON LISP
      as DEFPARAMETER or something.  

          Rename what the COMMON LISP manual now calls DEFCONST
          to be DEFCONSTANT.  Introduce a new construct
          DEFPARAMETER to do what Lisp Machine LISP DEFCONST
          does.  Retain DEFVAR with that name.

  22. Should one allow both required and optional keyword
      parameters?  

          No; have only optional keyword parameters.  However,
          permit the use of ordinary &OPTIONAL parameters and
          &KEY parameters together, with the former preceding
          the latter.

  23. Should &REST be allowed with &KEY after all?  How about
      &ALLOW-OTHER-KEYS?  (It turns out these features are useful
      for gathering up all your keyword arguments and letting some
      sub-function inherit them.)  

          Yes.  Also, state that if a key is duplicated as an
          argument the leftmost one prevails and the others are
          ignored (a GET-like model).

  24. Should EQUAL be allowed to descend into arrays?  Vectors?  

          No and yes.  See issue 6.

  25. Proposed not to attempt to standardize on a package system
      for COMMON LISP at this time.  Reserve the colon character
      for the purpose, and explain a bit about how to use it, but
      don't tie down all package details.  

          Agreed.  State that there are packages, and that they
          are used as arguments to certain functions such as
          INTERN.  State explicitly that inheritance
          properties, if any, are undefined.  State that all
          uses of colon except for keyword syntax are reserved,
          but mention the general intent of the notation
          FOO:BAR.

  26. Various problems with pathnames: generic pathnames,
      interning, :UNSPECIFIC, logical pathnames.  

          GLS and SEF will propose a very stripped-down subset
          of what is presently in the COMMON LISP manual.  RG
          and Symbolics may also make proposals.

  27. What is the maximum array rank?  Is it implementation-
      dependent?  If so, what is the minimum maximum?  

          The minimum maximum shall be 63.

  28. What special forms may be documented to be macros, in order
      to minimize the number of special forms?  

          There is agreement so to minimize the number of
          special forms.  GLS will make a proposal.

  29. Should a way be provided for the user to define functional
      streams?  If not, should DEFSELECT and SELECT-expressions
      remain in the language?  

          COMMON LISP will, for now, not specify a way to
          create functional streams.  DEFSELECT and select-
          expressions shall be deleted.  HIC will propose
          within two weeks a simplified instance system similar
          to the ones in MACLISP and NIL intended to support
          object-oriented programming while remaining neutral
          with respect to inheritance issues, to encourage
          experimentation.

  30. Are COMMON LISP arrays stored in row-major order?  If not,
      what is the interaction with displaced arrays?  

          Row-major order shall be used.

  31. It is silly to have two of every I/O function, one for
      integers and one for characters.  Flush one set (TYI, TYO,
      TYIPEEK, ...) and leave the other (INCH, OUCH, INCHPEEK,
      ...).  

          Agreed to eliminate the TYI series and to rename the
          others to eliminate the ``cute'' names.  The new
          names will be:

              Old name            New name
              inch                read-char
              ouch                write-char
              in                  read-byte
              out                 write-byte
              inchpeek            peek-char
              (none)              peek-byte

  32. Should some value be reserved for eof-value to mean the same
      as not supplying any?  

          All occurrences of an eof-value parameter shall be
          replaced by two &OPTIONAL parameters eof-errorp
          (defaulting to true) and eof-value (defaulting to
          NIL, and meaningful only if eof-errorp is false).
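
          For example (assuming the stream argument precedes the two
          new optionals):

              (read-char stream)           ;signal an error at end of file
              (read-char stream nil)       ;return NIL at end of file
              (read-char stream nil 'eof)  ;return the symbol EOF at end of file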

  33. FORCE-OUTPUT should not wait, but just initiate I/O.  There
      should be a FINISH-OUTPUT (which implies FORCE-OUTPUT) that
      does wait.  

          Yes.

  34. Should all vectors have fill pointers?  If so, should nearly
      all functions consistently use the active-length?  

          An array may or may not have a fill pointer.  Vectors
          (``quickarrays'') shall be defined as a subset of
          arrays in such a way that they have no fill pointers.
          All functions that use arrays will use the fill
          pointer to bounds-check and limit access, except AREF
          (and SETF thereof).  Array slots beyond the fill
          pointer are still alive and may not be gratuitously
          garbage-collected or otherwise destroyed.

  35. Should HAULONG be changed from its current definition
      ceiling(log2(abs(integer)+1)) to the one proposed by EAK, namely

      if integer<0 then ceiling(log2(-integer)) else ceiling(log2(integer+1))

      ?  With either definition, a non-negative integer n can be
      represented in an unsigned byte of (HAULONG n) bits.  With
      EAK's definition, it is also true that an integer n (positive
      or negative) can be represented in a signed byte of (+ (HAULONG N) 1)
      bits.  

          Adopt the EAK definition under the name integer-
          length.
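
          For example, under the EAK definition:

              (integer-length 4)   =>  3    ;4 fits in 3 unsigned bits
              (integer-length 7)   =>  3
              (integer-length 8)   =>  4
              (integer-length -4)  =>  2    ;-4 fits in a signed byte of (+ 2 1) bits
              (integer-length 0)   =>  0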

  36. Should HAULONG and HAIPART be given more reasonable names?  

          Eliminate both HAULONG and HAIPART.  Put
          compatibility notes under INTEGER-LENGTH and LDB.

  37. How should the IEEE proposed floating-point standard be
      accommodated?  In particular, what about the trichotomy of <,
      =, and >, and how might underflow/infinity/NAN values be
      handled?  

          Agreed that COMMON LISP must permit the use of IEEE
          proposed floating-point format and operations.
          Statements about such matters as trichotomy must be
          carefully worded.  Suggestions will be solicited from
          RJF concerning such accommodation.

  38. If PROGV has more variables than values, proposed that the
      extra variables be initialized to ``unbound'' rather than
      NIL.  This is already done in MACLISP.  

          Yes.  (It was suggested that a similar thing be done
          with LET and PROG: (LET ((A ()) B) ...) makes A bound
          to NIL, but B unbound (that is, an error to refer to
          it before it is set).  My notes are unclear as to
          whether this was agreed to.)

  39. Shall EXCHF be generalized to take n locations and
      left-rotate them?  Shall SWAPF be correspondingly generalized
      to take N locations and a value and left-shift them?  (Suzuki
      indicates that these are useful primitives and lead to more
      understandable code in some cases.)  

          PSETF, a parallel SETF shall be introduced; so shall
          a multiple-pair sequential SETF.  GLS is to propose
          Suzuki-type primitives with better names than EXCHF
          and SWAPF.
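
          Illustratively, the parallel form evaluates all the new
          values before storing any of them:

              (psetf (car x) (cdr x)        ;exchanges the car and cdr of X
                     (cdr x) (car x))

              (setf a 1 b (+ a 1))          ;sequential: B ends up as 2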

  40. Shall TYPEP of one argument be renamed to something else, or
      eliminated?  

          Rename it to be TYPE-OF.  Note that while the results
          are implementation-dependent, they can be used in
          portable code, for example, by handing the result,
          without further examination, to MAP or CONCATENATE.

  41. In pathnames, shall names and types also be allowed to be
      structured?  

          Deferred.  See issue 26.

  42. Should the interpreter be required or encouraged to check
      type declarations?  At binding time?  At SETQ time?  

          Encouraged but not required.  That is, violation of
          type declarations ``is an error'' but does not
          ``signal an error''.

  43. How should complex numbers be notated?  With #C syntax?
      Using an infix J?  In the form x+yI?  Examples: #C(0 1),
      #C(3.5E-7 -15/3); 0J1, 3.5E-7J-15/3; 1I, 3.5E-7-15/3I.  

          Retain #C syntax.

  44. How about using :OPTIONAL instead of &OPTIONAL, and so on?  

          No; retain &OPTIONAL.

  45. Rename CEIL and TRUNC to CEILING and TRUNCATE?  

          Yes.

  46. Should minimum precision and exponent range be specified for
      each of the floating-point formats?  

          Yes, but the table in the COMMON LISP manual does not
          accommodate Lisp Machine LISP short-floats and S-1
          LISP short-floats.  GLS will fix this.

  47. Should it be specified that if an implementation provides
      IEEE proposed floating-point format, that the single and
      double formats shall in fact be the IEEE single and double
      formats?  

          Yes; however, carefully word it, because the IEEE
          proposed standard permits implementation of single-
          precision format without implementing double-
          precision format.

  48. Revise the format for vectors, structures, and so on, all to
      use #(...) syntax.  Boxed vectors can be #(V ...), arrays can
      be #(ARRAY ...), complex numbers #(C 0 1), and structures
      #(SHIP ...).  RMS says this will be easier for EMACS-like
      editors to parse.  

          No; this was judged harder for people to read.  Let
          the editors be fixed.

  49. Are variables truly to be lexically, rather than locally,
      scoped?  

          Yes.

  50. Are FLET, LABELS, and MACROLET worth keeping?  

          Yes.

  51. Rename multiple-value constructs to all have the same prefix,
      either MV, MV-, or MULTIPLE-VALUE-, consistently.  

          The prefix ``MULTIPLE-VALUE-'' will be used
          consistently.

  52. Proposed to keep the names GET and REMPROP, and rename
      PUTPROP to PUT (revising argument order), rather than current
      names GETPR, PUTPR, and REMPR.  

          The names GET and REMPROP will be used.  PUTPROP is
          eliminated in favor of SETF of GET.
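
          That is, where one formerly wrote (PUTPROP 'FOO 'BAR 'COLOR)
          (MACLISP argument order), one would now write:

              (setf (get 'foo 'color) 'bar)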

  53. The definition of ADJUST-ARRAY-SIZE on multi-dimensional
      arrays seems to follow from implementation considerations,
      rather than because it is useful to a user.  ARRAY-GROW is
      more user-useful.  Flush ADJUST-ARRAY-SIZE?  

          MOON has a proposal that a function of this kind
          should take keywords much as MAKE-ARRAY does, but
          also take an ``old array'' that is made to conform to
          the new specifications.  GLS will make a concrete
          proposal.

  54. Should things like CHARPOS and LINENUM be built in?  

          No, eliminate them.

  55. Flush MAKE-IO-STREAM and MAKE-ECHO-STREAM; they are too
      simplistic and can't really handle the tight I/O coupling.  

          Keep them, but find a better name for MAKE-IO-STREAM.
          They may be useful in simple cases.

  56. Should OUT be able to output other than positive integers?
      What about ways of putting other objects, such as
      floating-point numbers, out to binary files?  

          No; it ``is an error'' for the argument to OUT (now
          named WRITE-BYTE) to be other than a positive integer
          that can fit into bytes of the size suitable to the
          stream.  GLS will make a proposal for other
          primitives to write floating-point numbers and other
          objects to binary files.

  57. Flush FQUERY, on the grounds that it is too hairy for simple
      things and not hairy enough for a general menu interface, for
      example?  

          Yes, flush it.

  58. Should the INTERLISP SATISFIES type specifier be introduced?
      If so, perhaps it should take a function, not a form, as
      there are problems with when to evaluate the form.  

          Yes.

  59. What about the proposed safety feature to prevent macros and
      special forms from being redefined as functions (signal a
      correctable error)?  

          Eliminate this ``feature''.

  60. Proposed: (RESTART [block-name]) is essentially a jump to the
      top of the named block, a somewhat smaller sledgehammer than
      a full PROG with a GO.  

          This appears to be a good idea; however, a more
          detailed proposal is required to clarify such points
          as whether a RESTART within a PROG unbinds and
          rebinds the variables.

  61. Proposed: (CYCLE . body) is like a BLOCK with no name and an
      implicit RESTART at the end.  This is a new name for
      DO-FOREVER.  Certain loops that are clumsy to do with DO are
      lots easier with this, without a full PROG.  

          Yes, but call it LOOP, as this is a special case of
          the forthcoming LOOP proposal.

  62. If a GO or RETURN is permitted to pass out of a CATCH-ALL or
      UNWIND-ALL, what arguments does the catcher function receive?
      

          Eliminate the current CATCH-ALL and UNWIND-ALL.
          Introduce a new function CATCH-ALL that is just like
          CATCH but takes no tag and catches any attempt
          whatsoever to unwind.  The return values for a GO and
          other odd cases must be defined.

  63. Is the IGNORE declaration acceptable?  

          Yes.

  64. Is the OPTIMIZE declaration acceptable?  

          It is a good idea but inadequate as currently
          defined.  SEF will make a new proposal.

  65. How should symbols that belong to no package be printed?  A
      suggestion: ``#:pname''.  The reader should read this as a
      non-uniquized symbol.  If the same gensym appears several
      times in the same printed expression, that can be handled by
      the #= and ## syntax.  

          Reserve this syntax pending a proper package
          proposal.

  66. Proposed to retract GCD of complex rationals, and restrict
      GCD to integer arguments.  

          Yes.

  67. Is PUSHNEW as a predicate useful, or should the simpler
      definition be adopted?  

          Use the simpler definition.

  68. PUSHNEW should take the :TEST keyword.  

          Yes.

  69. Should the :TEST-NOT keyword be flushed for ADJOIN, UNION,
      INTERSECTION, SETDIFFERENCE, and SET-EXCLUSIVE-OR?  

          Yes.

  70. What about pervasive package syntax?  Is ``SI:(A B C)''
      legal?  

          Yes.  Actually, this syntax is reserved for whatever
          the package system looks like.

  71. For circular list syntax, should it be required that a
      reference #n# not occur before the #n= that defines it?  

          Yes.  This restriction may be relaxed in the future.
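
          (For example, #1=(A . #1#) denotes a circular list of A's;
          the reference #1# follows the defining #1=, as required.)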

  72. Rename the type STRING-CHAR to be STRING-CHARACTER.  

          No.

  73. Rename the type RANDOM; that is an abuse of the word RANDOM.
      Suggestion: INTERNAL.  It is not necessarily used at all; it
      is just a catch-all under which to put odd implementation-
      dependent things such as pointers into the stack or absolute
      memory.  

          Eliminate the type RANDOM.  Introduce the new type
          COMMON (see issue 18).  Mention that there may be
          implementation-dependent data types.

  74. There needs to be a kind of BLOCK to which RETURN is
      oblivious.  It is primarily useful for macro definitions, so
      it need not have a simple name.  How about INVISIBLE-BLOCK?  
          GLS and SEF will make a better proposal.

  75. Proposed to flush DEFMACRO-CHECK-ARGS: the macro should
      always perform this error check.  

          Yes.  At least, whether the error check is done
          should be left to the implementation just as
          wrong-number-of-arguments-to-a-function is.

  76. Proposed to flush DEFMACRO-MAYBE-DISPLACE and macro-
      expansion-hook; the former is useless and the latter is not
      general enough.  

          Deferred.

  77. Proposed to make MACROEXPAND-1 be the sole standard hook for
      getting at a macro expanding function; this means MACRO-P
      should not return the macro function.  This allows the
      implementation to provide whatever memoizing scheme
      appropriate.  

          Interesting idea; deferred.  MOON and SEF will make
          proposals on this issue and the previous one.

  78. Need some kind of declaration to locally shadow a globally
      pervasive SPECIAL declaration.  

          The pervasiveness of such declarations must be
          clarified.  Sample code for EVAL should appear in the
          COMMON LISP manual.  GLS will propose such code.

  79. Functions that take two sequences should not accept :START
      and :END, only :START1 and friends, to minimize confusion.  

          Yes.

  80. All built-in MAKE- functions should take keywords.  

          Yes.

  81. There should be only one function for creating hash tables,
      and it should take a :TEST keyword.  

          Yes, but the value for this keyword is restricted to
          a small set of possibilities.

  82. One-argument FLOAT should either always return a single-
      float, or use the format specified by read-default-float-
      format.  

          One-argument FLOAT will always return a SINGLE-FLOAT.

  83. Proposed to make the second argument (the divisor) to MOD and
      REMAINDER required, not optional.  

          Accepted.  The one-argument case is obtained by
          providing 1 as a second argument, which is much
          clearer.

  84. If RANDOM can take two arguments, the first effectively
      optional and the second required, why cannot LOG do the same?
      How about EXP, too?  

          The second (optional) argument to RANDOM shall be
          eliminated.

  85. TRUENAME of a string should look in the file system, not just
      return the string.  

          Yes.

  86. WITH-OPEN-FILE should not be specified to ask the user; if
      anything, it should merely specify that an error is
      signalled.  

          Yes.

  87. The keyword arguments to LOAD should be fixed up in a way to
      be proposed by MOON.  

          MOON will make a specific proposal.

  88. Can DEFUN be used to define properties?  How about more
      general function-specs, as in Lisp Machine LISP?  

          Function-specs are tentatively accepted pending a
          specific proposal.

  89. Let declarations and documentation-strings occur in any order
      in a DEFUN and similar forms.  

          Yes.

  90. Provide a functional interface for accessing documentation
      strings, rather than mentioning the DOCUMENTATION property.  

          Deferred for discussion by network mail.

  91. Clarify the status of the DOLIST/DOTIMES variable when the
      result-form is executed.  Proposed: for DOLIST, variable is
      bound and has NIL as its value; for DOTIMES, variable is
      bound and has as value the value of the countform.  

          GLS will make a proposal.  Feelings were not strong
          except that the issue must be tied down.

  92. Is VALUES a function?  Or should it, like PROGN, really be
      regarded as a special form?  

          It is a function.

  93. Should LOCALLY be retained, or should one simply write (LET
      () ...)?  

          Retain LOCALLY.

  94. Extend THE to handle multiple values?  One way is to provide
      a limited type specifier so that one may write

          (mvcall #'+ (the (values integer integer) (floor x y)))

          Yes.

  95. Should compiler warnings of unrecognized declarations be
      required or merely recommended?  Perhaps required as the
      default, but a switch may be provided?  

          Agreed that it is probably a good idea to require it,
          provided that a declaration may be made to indicate
          that a particular declaration is legitimate.  GLS
          will make a specific proposal.

  96. There should be a kind of FLOAT that accepts a type specifier
      instead of an example of that type.  (But the kind that takes
      an example is useful too.)  

          The function TO should be renamed COERCE, take a type
          specifier as second argument, and be extended to
          other cases such as floats.

  97. Have a function that somehow extracts the fraction from a
      floating-point number and returns it as an integer.
      Proposed: FLOAT-FRACTION-INTEGER takes a floating-point
      number x and returns two integer values; the second is the
      precision p of the representation, and the first is a value j
      such that (= j (SCALE-FLOAT (FLOAT-FRACTION x) p)).  Or
      perhaps this should be two separate functions.  

          MOON will make a proposal.  The term fraction should
          be replaced where appropriate by significand, as
          there can be confusion with integer-part/fractional-
          part.

  98. Flush MASK-FIELD and DEPOSIT-FIELD?  

          No, keep them.

  99. Is the proposed definition of backquote acceptable?  

          The printed copies of the COMMON LISP manual did not
          make backquotes visible.  GLS will send the proposal
          out by network mail for discussion.

 100. STRING-CHARP should be true of <space>.  

          Yes.

 101. Rename GRAPHICP and ALPHAP to GRAPHIC-CHARP and ALPHA-CHARP.
      

          Yes.

 102. Introduce CHAR<=, CHAR>=, and CHAR/=, and let the character
      comparators take multiple arguments as for the numeric
      comparators. (But note that (char<= #\A X #\Z) doesn't
      guarantee that X is a letter.)  

          Yes.

 103. In STRING-CAPITALIZE, should digits count as word-
      constituents, even though they don't have case?  

          Yes.  (This is as in EMACS.)

 104. Do not have both DIGIT-CHARP and DIGIT-WEIGHT.  

          Flush DIGIT-CHARP; let DIGIT-WEIGHT return NIL for a
          non-digit.

 105. Introduce a function MAKE-SEQUENCE taking a type, length, and
      a keyword :INITIAL-VALUE.  

          Yes.

 106. CATENATE or CONCATENATE?  The OED says they are semantically
      identical.  

          CONCATENATE it is.

 107. Be sure to add REMOVE-DUPLICATES and DELETE-DUPLICATES.  
          Yes.

 108. Suggest letting NIL as a return type to MAP mean return no
      value, to get a MAPC-like effect.  

          Yes.

 109. Flush :FROM-END keyword for the COUNT function?  

          No, keep it for consistency, even though it is
          useless.

 110. What should be done about SUBST and SUBLIS?  

          Have four functions SUBST, NSUBST, SUBLIS, and
          NSUBLIS.  All take the usual applicable sequence
          keywords, particularly :TEST.  They do operate on
          cdrs.  The non-destructive versions do maximal
          sharing; (SUBST () () X) will no longer be a good way
          to copy X.  Give a sample definition of SUBST in the
          COMMON LISP manual.
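
          As a rough illustration (ignoring :TEST-NOT, :KEY, and the
          maximal-sharing requirement), such a SUBST might be defined:

              (defun subst (new old tree &key (test #'eql))
                (cond ((funcall test old tree) new)
                      ((atom tree) tree)
                      (t (cons (subst new old (car tree) :test test)
                               (subst new old (cdr tree) :test test)))))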

 111. Flush the restriction that the result of SXHASH be
      non-negative?  

          Keep the restriction.

 112. What is the interaction of ARRAY-GROW and displaced arrays?  

          This does the ``obvious right thing''.  Run-time
          access checks may be required if the displaced-to
          array is altered.

 113. Add WITH-INPUT-FROM-STRING and WITH-OUTPUT-TO-STRING?  

          Yes, and note that the created stream has only
          dynamic extent.

 114. Should CLOSE take a simple flag or a keyword argument :ABORT,
      defaulting to NIL?  

          A keyword argument: (CLOSE :ABORT T).

 115. Rename PRIN1STRING to be PRIN1-TO-STRING.  

          Yes.

 116. Rename FILEPOS to FILE-POSITION.  

          Yes.

 117. Rename COMFILE to COMPILE-FILE?  

          Yes.

 118. Reconsider the problem of getting at file attributes, such as
      author.  

          A single function GET-FILE-INFO should take a stream
          and a keyword indicating what is desired.

 119. Add COMPILER-LET?  

          Yes.

 120. Rename the type SUBR to be COMPILED-FUNCTION.  

          Yes.  Also rename SUBRP.

 121. Note that implementations may provide other &-keywords for
      lambda lists (these won't be portable, however).  

          Yes.  Document the variable LAMBDA-LIST-KEYWORDS.

 122. Rename the ONEOF type specifier to be MEMBER.  

          Yes, in principle; tie this to the outcome of MEMBER.
          (This came out all right (issue 7), so I take this to
          be an unqualified yes.)

 123. The syntax of ratios should be clarified.  Proposal:
      ratio ::= [sign] {digit}+ / {digit}+

          Yes.

 124. Proposed to call the page-separator character #\PAGE instead
      of #\FORM.  

          Yes.

 125. Proposed that all COPY functions should be spelled COPY-.  

          Yes, except if a function is named just ``COPY'',
          don't call it ``COPY-''!

 126. Proposed: a new FORMAT directive ~$, as in MACLISP, for
      better control over floating-point number printout.  

          Yes.

 127. Proposed: a new FORMAT directive ~/.../, where ``...'' is a
      picture, for pictorial representation of integer and
      floating-point printout, as in PL/I and COBOL.  Details are
      to be determined, but as possible examples:

          Value      Picture               Result
          65.67      ~/$$,$$$,$$9.V99 CR/  "       $65.67   "
          0.0        ~/$$,$$$,$$9.V99 CR/  "        $0.00   "
          -65432.01  ~/$$,$$$,$$9.V99 CR/  "   $65,432.01 CR"
          -65432.01  ~/$*,***,**9.V99 CR/  "$***65,432.01 CR"
          -65432.01  ~/$Z,ZZZ,ZZ9.V99 CR/  "$   65,432.01 CR"
          6.5        ~/S9.V99999ES99/      "+6.50000E+00"
          .0067      ~/S9.V99999ES99/      "+6.70000E-03"
          456        ~/ZZZZZ9/             "   456"
          456        ~/999999/             "000456"

      Some study of existing picture formats would be necessary.
      If it's done right, EDIT instructions on machines such as the
      IBM 370 and DEC VAX might be applicable.  

          General sympathy for this idea, but GLS must make a
          concrete full proposal.

 128. Proposed: a new FORMAT directive ~U that prints
      floating-point numbers in exponential form, with the exponent
      a multiple of three, and also outputs a standard metric
      prefix such as ``kilo'' to match.  

          Yes, in principle; a complete proposal will be
          discussed by network mail.

 129. Proposed: a new FORMAT directive ~(...~) for case conversion.
      No flags means force to lower case; colon capitalizes all
      words; atsign capitalizes just the first letter, lower-casing
      all others; colon and atsign forces to upper case.  This is
      useful for such things as

          (defun foo (n) (format () "~@(~R~) error~:P detected." n))
          (foo 0) -> "Zero errors detected."
          (foo 1) -> "One error detected."
          (foo 23) -> "Twenty-three errors detected."

          Yes, but better characters than ``('' and ``)''
          should be found.  GLS will make a proposal.

 130. Proposed: in FORMAT, eliminate ~[ from COMMON LISP, but
      retain the colon and atsign versions.  

          No, keep it.

 131. Proposed: new FORMAT directive ~? to mean that an argument is
      to be interpreted as a control string, as if inserted at that
      point.  This is simpler than remembering to use ~1{~:}.  

          Yes.  Look for a better character.

 132. Proposed: FORMAT directives to perform FORCE-OUTPUT and
      CLEAR-OUTPUT.  

          No!  Fix the manual under FORCE-OUTPUT and
          CLEAR-OUTPUT.

 133. Should SETDIFFERENCE be renamed to be SET-DIFFERENCE?  

          Yes.

 134. Should PUTHASH take arguments in the order key, hash-table,
      value?  

          This is no longer relevant; see issue 10.

 135. Is it all right to make UNION and INTERSECTION take only two
      arguments, in order to accept the :TEST keyword?  

          Yes.

 136. Consider changing BUTTAIL back to LDIFF.  

          Yes.

 137. Is the definition of CHARACTER acceptable?  

          Yes.

 138. Should GCD and LCM take any number of arguments, or exactly
      two?  

          Leave them alone, taking any number.

 139. Should there be a DO-PROPERTIES to complement MAP-PROPERTIES?
      

          Flush them both.

 140. Should REMOB get a better name, say UNINTERN?  

          Yes.

 141. Should there be a DEFPR to replace DEFPROP, or should this
      just be flushed?  
          Flush DEFPROP.

 142. Should RANDOM take a RANDOM-STATE as an optional argument,
      rather than looking at a special variable?  

          Yes; the optional argument defaults to the special
          variable now defined.

 143. Add a SUSPEND function?  What are its defined properties?  

          No.

 144. Make the character-bag in STRING-TRIM optional, defaulting to
      the space character (alternatively, all whitespace
      characters).  

          No; it would be too confusing, and it's not hard to
          specify the bag explicitly.  The character-bag may be
          any sequence containing characters.

 145. Should REMAINDER be renamed REM?  

          No; too much chance of confusion with REMOVE or
          REMPROP.

 146. Should something be done about the fact that BYTE specifiers
      use a start-count (actually, a count-start) convention, while
      the rest of the language uses a start-end convention?  

          This is a duplicate of issue 2.

 147. Syntax for non-decimal floating-point numbers?  

          No.

 148. Shall PRIN1 be required or encouraged to print radix
      specifiers in lower case (e.g., #o instead of #O) for
      readability?  

          Yes, required.

 149. Rename GET-PNAME to SYMBOL-PRINT-NAME.  

          Yes.  Moreover, create a series of five parallel
          names:

              symbol-pname
              symbol-package
              symbol-plist
              symbol-function
              symbol-value

 150. Disposition of BOOLE: should it be as in MACLISP?  Should it
      remain two-argument?  Should it have the hairy EAK
      definition?  

          As far as COMMON LISP is concerned, it takes two
          (that is, three) arguments.

      This is the last of the issues on the original agenda.  The
      following additional items were brought up at the meeting.

 151. Shall FIRST and its friends be provided?  

          The following shall be added to COMMON LISP:  FIRST,
          SECOND, THIRD, FOURTH, FIFTH, SIXTH, SEVENTH, EIGHTH,
          NINTH, TENTH, and REST, all operating on lists only.

 152. Should 1E3 be considered to be floating-point syntax?
      MACLISP says no, Lisp Machine LISP and INTERLISP say yes.  

          Yes.

 153. Should TRACE be a special form or a function?  

          A function, taking a function spec and keyword
          arguments.  A proposal will be forthcoming.

 154. A proposal for CHECK-ARG-TYPE will be made.

 155. Should there be a predicate KEYWORDP?  

          Yes.

 156. There will be a proposal for lambda macros and compiler-only
      macros (optimizers).

 157. Should there be a TREE-EQUAL predicate that takes a :TEST
      keyword for use on leaves?  

          Yes.

 158. Consider the naming conventions of T:

         - XXX? instead of XXXP for predicates.

         - All special variables have names beginning and ending
           with ``*''.

          No action.

 159. Mention prominently in the section on the reader that the
      characters !?[]{} are reserved for user read-macros.

 160. DLW will propose an improved error-handling system.

 161. Should FUNCALL* be eliminated, and APPLY generalized to be
      FUNCALL*?  

          Yes.
-------

∂23-Aug-82  2021	Earl A. Killian <EAK at MIT-MC> 	intern 
Date: 23 August 1982 20:27-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject: intern
To: Common-Lisp at SU-AI

In order to implement point 3, how about a per-package
intern-hook?

∂23-Aug-82  2021	Earl A. Killian <EAK at MIT-MC> 	SET vs. SETF
Date: 23 August 1982 20:21-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject: SET vs. SETF
To: Common-Lisp at SU-AI

It occurs to me that SETF is going to be very common now, with
most of the updating functions flushed.  How about giving it the
name SET, which I take from point 10 has been flushed in favor of
(SETF (SYMBOL-VALUE s) value)?  I never did understand what the
"F" stands for anyway.

∂23-Aug-82  2021	Earl A. Killian <EAK at MIT-MC> 	byte specifiers  
Date: 23 August 1982 20:13-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject: byte specifiers
To: Common-Lisp at SU-AI

Can someone explain to me why the byte functions take a byte
specifier which encodes the position and size rather than just
taking two separate arguments?  The first thing that comes to
mind is that that's the way the pdp10 (and lispm too?) do it,
which is not much of a reason.

The other reason I can think of is so that you can pass them
around as arguments with one parameter instead of two, but that's
not very compelling either.

∂23-Aug-82  2021	Earl A. Killian <EAK at MIT-MC> 	lowercase in print    
Date: 23 August 1982 20:06-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject: lowercase in print
To: Common-Lisp at SU-AI

     148. Shall PRIN1 be required or encouraged to print radix
	  specifiers in lower case (e.g., #o instead of #O) for
	  readability?  

	      Yes, required.

How about the exponent specifier too, as in 1e6 instead of 1E6?  It makes it
clear to the user that it isn't a symbol since the language uppercases
symbols.

∂23-Aug-82  2029	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Issue #106    
Date: Monday, 23 August 1982, 20:45-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: Issue #106
To: STEELE at CMU-20C
Cc: common-lisp at SU-AI
In-reply-to: The message of 23 Aug 82 16:12-EDT from STEELE at CMU-20C

According to my notes of the meeting, we agreed on CATENATE rather than
CONCATENATE.  I was sitting at the opposite end of the table from GLS.
Opinions/facts?

∂23-Aug-82  2029	Earl A. Killian <EAK at MIT-MC> 	typep  
Date: 23 August 1982 20:43-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject: typep
To: common-lisp at SU-AI

  40. Shall TYPEP of one argument be renamed to something else, or
      eliminated?  

          Rename it to be TYPE-OF...

Now, how about reversing the argument order of TYPEP?  This makes
it easier to read, especially when the value takes more than one
line to express.

∂23-Aug-82  2034	Guy.Steele at CMU-10A 	Re: byte specifiers   
Date: 23 August 1982 2328-EDT (Monday)
From: Guy.Steele at CMU-10A
To: Earl A. Killian <EAK at MIT-MC>
Subject:  Re: byte specifiers
CC: common-lisp at SU-AI
In-Reply-To:  Earl A. Killian@MIT-MC's message of 23 Aug 82 19:13-EST

The main reason for byte specifiers is so that you can put a single
byte specifier into a variable:
	(DEFCONSTANT %TCBAZ (BYTE 3 17))
	... (LDB %TCBAZ X) ...
instead of having to do this:
	(DEFCONSTANT %TCBAZ-SIZE 3)
	(DEFCONSTANT %TCBAZ-POS 17)
	... (LDB %TCBAZ-SIZE %TCBAZ-POS X) ...
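For instance, the field extraction itself works like this:
	(LDB (BYTE 3 17) (ASH 5 17))  =>  5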
--Guy

∂23-Aug-82  2137	Kent M. Pitman <KMP at MIT-MC> 	SET vs. SETF 
Date: 24 August 1982 00:33-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  SET vs. SETF
To: EAK at MIT-MC
cc: Common-Lisp at SU-AI

    Date: 23 August 1982 20:21-EDT
    From: Earl A. Killian <EAK>
    To:   Common-Lisp at SU-AI
    Re:   SET vs. SETF

    It occurs to me that SETF is going to be very common now, with
    most of the updating functions flushed.  How about giving it the
    name SET, which I take from point 10 has been flushed in favor of
    (SETF (SYMBOL-VALUE s) value)?  I never did understand what the
    "F" stands for anyway.
-----
I introduced this name change last summer for T (Yale Scheme). We're happy
with the change. I second Killian's motion to do the same in Common Lisp.
-kmp

∂24-Aug-82  0032	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	SET vs. SETF  
Date: Tuesday, 24 August 1982, 03:29-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: SET vs. SETF
To: Kent M. Pitman <KMP at MIT-MC>, EAK at MIT-MC
Cc: Common-Lisp at SU-AI
In-reply-to: The message of 24 Aug 82 00:33-EDT from Kent M. Pitman <KMP at MIT-MC>

Renaming SETF to SET would be a bad idea, because there is a whole
family of xxxF functions.  Some of them are modified versions of
functions without the F, so you can't just take the F off of all
of them.

Back in about 1973 when SETF was part of DEFSTRUCT, its name meant
"set field".  It doesn't exactly mean that any more, of course.

∂24-Aug-82  0042	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	lowercase in print 
Date: Tuesday, 24 August 1982, 03:40-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: lowercase in print
To: Earl A. Killian <EAK at MIT-MC>
Cc: Common-Lisp at SU-AI
In-reply-to: The message of 23 Aug 82 20:06-EDT from Earl A. Killian <EAK at MIT-MC>

    Date: 23 August 1982 20:06-EDT
    From: Earl A. Killian <EAK at MIT-MC>

    How about the exponent specifier too, as in 1e6 instead of 1E6?  It makes it
    clear to the user that it isn't a symbol since the language uppercases
    symbols.
You are absolutely right.

∂24-Aug-82  0907	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	typep 
Date: Tuesday, 24 August 1982, 11:58-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: typep
To: EAK at MIT-MC, common-lisp at SU-AI
In-reply-to: The message of 23 Aug 82 20:43-EDT from Earl A. Killian <EAK at MIT-MC>

I don't think these arguments are strong enough to justify making an
incompatible change in the order of the arguments.  As I discussed at
the meeting, there is still an important criterion of "brain
compatibility" that affects whether a change should be made or not; I
don't want to have to relearn this and change all my code for such weak
reasons.

∂24-Aug-82  1008	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Results    
Date: Tuesday, 24 August 1982, 13:03-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: Results
To: common-lisp at su-ai

I painstakingly checked your "Results" with my notes; they are nearly
identical.  Here are some comments.

It looks like the concept of "local scope" is no longer used by Common Lisp.
If so, it should be removed from the manual.

On point 38, we did not agree to define (LET (B) ...) to make B unbound;
at least, I didn't hear about it if we did.

On point 110, you forgot to mention explicitly that EQL is the default
for all these functions.

Other than that it looks great.  I've started typing in the new
error system documentation and will send it out as soon as I
can.

∂24-Aug-82  1021	HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility) 	a protest    
Date: 24 Aug 1982 1321-EDT
From: HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility)
Subject: a protest
To: common-lisp at SU-AI

I would like to protest the decision to allow non-local GO's.  We are
doing our best to make a Common Lisp implementation on the 20 that will
produce code comparable in efficiency to Maclisp.  We are trying to
come up with ways to implement all of the hairy constructs that 
penalize only people who use them.  We have found a way to do this for
multiple values, optional arguments, &REST, etc.  I do not see any way
to implement non-local GO's without effectively turning every PROG
into a CATCH.  I realize that microcoded implementations will have
frames around for everything, and thus that they will have no problem.
But I believe that non-local GO's are not reasonable on conventional
machines.  I thought one constraint on the language design was that it
should not have features that would require conventional implementations
to put in things such as stack frames.  I believe that CATCH and THROW
should accomplish what is intended by a non-local GO, and that it is
more in the spirit of existing Lisps to do that.  I would also be
willing to settle for a separate kind of PROG that allows that feature.
(If necessary, we will implement it that way, and provide a way to
set things so that this CATCH-PROG is used in place of the normal
PROG for users who really need to do non-local GO's.)

Also, I did not see any decision on closures.  We feel very strongly
that lexical closures are enough, and that the general CLOSE-OVER is
unnecessary.  I believe that I have a way to implement a general
CLOSE-OVER without causing overhead to non-users, but it is so hideous
that no sane person would want to do it.
-------

∂24-Aug-82  1115	Jonathan Rees <Rees at YALE> 	Non-local GO's 
Date: Tuesday, 24 August 1982  13:59-EDT
From: Jonathan Rees <Rees at YALE>
To: Hedrick at RUTGERS
Cc: Common-Lisp at SU-AI
Subject: Non-local GO's

I think that non-local GO's can be implemented at no cost to
non-users, given an appropriate compilation strategy.

Yale's Scheme implementation (also known as T) supports a lexical
catch/throw almost identical to Common Lisp's BLOCK/RETURN-FROM
facility.  The compiler translates lexical throws into direct jumps
(with possible stack adjustment) where this is possible, and uses a more
general CATCH/THROW mechanism where necessary.  I believe that
determining which compilation strategy to use for PROG is the same
problem as that for BLOCK.

If you believe that non-local GO's should be abolished, then you should
also argue against general BLOCK/RETURN-FROM, which I believe will be used
as much if not more than PROG/GO, and so should also allow "efficient"
compilation strategy where its full generality isn't used.

This is perhaps not the place to go into the details of how our compiler
works.  Suffice it to say it can be done (on conventional machines), and
is not particularly hairy.  The key point is that the scope of the GO
tags is lexical, so one can find all the GO's belonging to a particular
PROG.  If no such GO's are from within "uncontrolled" closures then
the compiler needn't use the completely general strategy.
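
For instance (an illustration, not from the spec), in

	(block lose
	  (dotimes (i n)
	    (when (zerop (aref v i)) (return-from lose i))))

the RETURN-FROM is lexically apparent and can compile into a direct
jump with a stack adjustment; only if #'(LAMBDA () (RETURN-FROM LOSE I))
were handed to some unknown function would the fully general mechanism
be needed.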

Please let me know if I've misunderstood something about the Common Lisp
spec.

∂24-Aug-82  1209	HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility) 	Re: Non-local GO's
Date: 24 Aug 1982 1508-EDT
From: HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility)
Subject: Re: Non-local GO's
To: Rees at YALE
cc: Common-Lisp at SU-AI
In-Reply-To: Your message of 24-Aug-82 1423-EDT

My understanding is that they were referring to a dynamic GO.  It
sounds like you are talking about a static one.  If you want to
GO into a lexically enclosing PROG, I have no problem with that at
all (indeed I agree that it is a good idea).  I read the proposal
as meaning that one could GO to any label in any currently active
PROG.
-------

∂24-Aug-82  1233	FEINBERG at CMU-20C 	Non-local GO's
Date: 24 August 1982  15:32-EDT (Tuesday)
From: FEINBERG at CMU-20C
To: Jonathan Rees <Rees at YALE>
Cc: Common-Lisp at SU-AI, Hedrick at RUTGERS
Subject: Non-local GO's

Howdy!
	I think the Common Lisp programmer could easily live with just
a local GO, and CATCH and THROW.  I see no justification for non-local
GOs at all.  It would seem to me that we would want to discourage the
use of GO in favor of more understandable looping constructs, rather
than give it the capability to make truly unreadable programs.
Some claim that PROG and GO are sometimes the clearest way to express
a loop; perhaps this is so.  However, I have never seen a piece of
code that would be much clearer with non-local GOs than with some other
control construct.

∂24-Aug-82  1304	FEINBERG at CMU-20C 	SET vs. SETF  
Date: 24 August 1982  16:04-EDT (Tuesday)
From: FEINBERG at CMU-20C
To: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Cc: Common-Lisp at SU-AI, EAK at MIT-MC,  Kent M. Pitman <KMP at MIT-MC>
Subject: SET vs. SETF

Howdy!

    Date: Tuesday, 24 August 1982, 03:29-EDT
    From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
    To:   Kent M. Pitman <KMP at MIT-MC>, EAK at MIT-MC
    cc:   Common-Lisp at SU-AI
    Re:   SET vs. SETF

    Renaming SETF to SET would be a bad idea, because there is a whole
    family of xxxF functions.  Some of them are modified versions of
    functions without the F, so you can't just take the F off of all
    of them.

Looking over my copy of the Colander Edition I find the following xxxF
functions:

SWAPF, EXCHF       -- These are being removed from the language.  Better
		      names are being found.

INCF, DECF	   -- These can be changed to INC and DEC with no name
		      conflict.

PUTF, GETF, REMF   -- These functions seem useless to me.  Why not
		      just 
    Back in about 1973 when SETF was part of DEFSTRUCT, its name meant
    "set field".  It doesn't exactly mean that any more, of course.

∂24-Aug-82  1311	FEINBERG at CMU-20C 	SET vs. SETF  
Date: 24 August 1982  16:10-EDT (Tuesday)
From: FEINBERG at CMU-20C
To: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Cc: Common-Lisp at SU-AI, EAK at MIT-MC,  Kent M. Pitman <KMP at MIT-MC>
Subject: SET vs. SETF

Howdy!
	Sorry, I slipped.  Anyway REMF, PUTF and GETF could be flushed
in favor of allowing lists to be passed to GETPR, PUTPR and REMPR.  I
agree with EAK that we should call SETF SET.

∂24-Aug-82  1432	Scott E. Fahlman <Fahlman at Cmu-20c> 	Issue #106 
Date: Tuesday, 24 August 1982  17:32-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Cc: common-lisp at SU-AI, STEELE at CMU-20C
Subject: Issue #106


My notes show CONCATENATE as the winner.  I prefer that to CATENATE,
though with very low weight on the whole issue.

∂24-Aug-82  1435	Earl A. Killian <EAK at MIT-MC> 	point 122   
Date: 24 August 1982 17:31-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject: point 122
To: common-lisp at SU-AI

Another possibility, instead of using MEMBER or ONEOF, is to
allow (QUOTE <A>) to be a legal type specifier that means exactly
the object <A>.  Example 1: (MEMBER A B C) would be (OR 'A 'B 'C).
Example 2: type LIST could be defined as (OR CONS '()).
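
For illustration (assuming TYPEP is extended to accept such specifiers):

	(TYPEP 'B '(OR 'A 'B 'C))    ; => T under this proposal
	(TYPEP NIL '(OR CONS '()))   ; => T, i.e. NIL is of type LIST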

∂24-Aug-82  1939	Earl A. Killian <EAK at MIT-MC> 	assert 
Date: 24 August 1982 20:53-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject: assert
To: common-lisp at SU-AI

How about defining
	(ASSERT test)
to be a form that asserts that test is true.  The compiler can

1) compile code to check this
2) depend on it
3) ignore it
4) verify it at compile time (ha ha)
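
For example (the function here is invented), one might write:

	(DEFUN STACK-POP (STACK)
	  (ASSERT (NOT (NULL STACK)))  ; compiler may check, trust, or ignore
	  (CAR STACK))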

∂25-Aug-82  0146	Robert W. Kerns <RWK at MIT-MC> 	SETF and friends 
Date: 25 August 1982 04:41-EDT
From: Robert W. Kerns <RWK at MIT-MC>
Subject: SETF and friends
To: common-lisp at SU-AI

Rather than trying to eliminate the letter 'F' from the language,
why not consider that this letter 'F' helps classify all these
special forms as being related?  I always think of 'F' as standing
for 'FORM', as in SET-FORM.  Now, admittedly, 'F' isn't very
obvious, and various other sundry functions happen to end in 'F', but
deleting the 'F' seems to be a step in the wrong direction.  If
there has to be a change, why not go to SET-FORM rather than
SET?  But personally I'd rather not make this kind of change at
all.

∂25-Aug-82  0957	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	SET vs. SETF  
Date: Wednesday, 25 August 1982, 12:53-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: SET vs. SETF
To: FEINBERG at CMU-20C
Cc: Common-Lisp at SU-AI, EAK at MIT-MC, Kent M. Pitman <KMP at MIT-MC>
In-reply-to: The message of 24 Aug 82 16:10-EDT from FEINBERG at CMU-20C

    Date: 24 August 1982  16:10-EDT (Tuesday)
    From: FEINBERG at CMU-20C

	    Sorry, I slipped.  Anyway REMF, PUTF and GETF could be flushed
    in favor of allowing lists to be passed to GETPR, PUTPR and REMPR.  
Read your manual more carefully.

    I agree with EAK that we should call SETF SET.
This is not acceptable to me, for reasons given in my previous message.

∂25-Aug-82  1103	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Keyword arguments to LOAD    
Date: Wednesday, 25 August 1982, 14:01-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: Keyword arguments to LOAD
To: Common-Lisp at SU-AI

Here is a revised proposal:

Keyword		Default		Meaning

:PACKAGE	NIL		NIL means use file's native package, non-NIL
				is a package or name of package to load into.

:VERBOSE	*LOAD-VERBOSE*	T means print a message saying what file is
				being loaded into which package.

:PRINT-FORMS	NIL		T means print forms as they are evaluated.
				[Do we want this?  It disappeared from the
				latest Common Lisp manual.]

:ERROR		T		T means handle errors normally; NIL means that
				a file-not-found error should return NIL
				rather than signalling an error.  LOAD returns
				the pathname (or truename??) of the file it
				loaded otherwise.

:SET-DEFAULT-PATHNAME	*LOAD-SET-DEFAULT-PATHNAME*
				T means update the pathname default
				for LOAD from the argument, NIL means don't.

:STREAM		NIL		Non-NIL means this is an open stream to be
				loaded from.  (In the Lisp machine, the
				:CHARACTERS message to the stream is used to
				determine whether it contains text or binary.)
				The pathname argument is presumed to be associated
				with the stream, in systems where that information
				is needed.

The global variables' default values are implementation dependent, according
to local conventions, and may be set by particular users according to their
personal taste.

I left out keywords to allow using a different set of defaults from the normal
one and to allow explicit control over whether a text file or a binary file
is being loaded, since these don't really seem necessary.  If we put them in,
the consistent names would be :DEFAULT-PATHNAME, :CHARACTERS, and :BINARY.
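
Under this proposal a call might look like this (the file and package
names are purely illustrative):

	(LOAD "INIT.LSP"
	      :PACKAGE 'USER     ; load into the USER package
	      :VERBOSE T         ; announce what is loaded into which package
	      :ERROR NIL)        ; return NIL if the file is not found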

∂25-Aug-82  1123	Kim.jkf at Berkeley 	case sensitivity   
Date: 25 Aug 1982 11:11:29-PDT
From: Kim.jkf at Berkeley
Mail-From: UCBKIM received by UCBVAX at 25-Aug-82 11:21:43-PDT (Wed)
Date: 25-Aug-82 11:20:43-PDT (Wed)
From: Kim:jkf (John Foderaro)
Subject: case sensitivity
Message-Id: <60852.16851.Kim@Berkeley>
Via: ucbkim.EtherNet (V3.147 [7/22/82]); 25-Aug-82 11:20:45-PDT (Wed)
Via: ucbvax.EtherNet (V3.147 [7/22/82]); 25-Aug-82 11:21:43-PDT (Wed)
To: common-lisp@su-ai

  I would like to bring up the issue of case sensitivity one last time.  The
latest version of the Common Lisp manual states that unescaped characters
in symbols will be converted to upper case, and I saw no mention of any way
of turning off this case conversion 'feature'.   From past discussions, I
know that any effort to determine whether case sensitivity is good or bad is
futile.  Thus I would like to take a look at the problem in a different way
and convince you that Common Lisp must be able to be both case sensitive and
case insensitive if it is to be widely accepted and used.

    I believe that a person's feelings about case sensitivity in Lisp are a
function of the operating system he is most at home with.  If your operating
system is case insensitive (by that I mean that file names and most
utilities are case insensitive), then you prefer your Lisp to be case
insensitive.  Most (if not all) pdp-10 os's are case insensitive and since
much lisp work has been done on 10's, it is no surprise that most Lisps are
case insensitive.  Although I don't know everyone on the Common Lisp
committee, my guess is that most of them favor case insensitivity due to
their use of case insensitive Lisps on case insensitive operating systems.

    What about the future?  I'm sure you realize that Vax'es and personal
workstations are going to be everywhere, and that many of them will run Unix
or some descendant.  Unix is case sensitive and if you want Common Lisp to
fit in on a Unix system you have to take that into account.  The fact that
Common Lisp is case insensitive would make it uncomfortable for the Unix
programmer to use.  The fact that everything is converted to UPPER CASE
would make it even worse.  The first thing Unix people would do with Common
Lisp is to hack it to make it case sensitive and convert all the code to
lower case.

    In order to prevent divergence of Unix Common Lisp from other
implementations, I propose this change:

 1) there is a 'switch' which selects whether the reader is case sensitive
   or insensitive.

 2) when the reader is case insensitive, it converts everything to lower
   case.


 I've already mentioned why (1) is important.  The reason that (2) is
important is that it permits someone who has selected case sensitivity to
write 'car' and have it match the correct system function.  If instead
everything were converted to upper case, the case sensitive programmer would
have to write CAR.  [The first thing most people do around here when they get a
new terminal is to disable the caps-lock key, so typing lots of capital
letters would be a real burden].  The only people hurt by (2) are users of
case sensitive systems which favor upper case.  I know of no such systems.

If you disagree with my proposal, please do not disagree for such irrelevant
reasons as:

  (a) your personal dislike for case sensitive systems.  There are people
      out in the world who prefer them and you must think of them.

  (b) your personal dislike for Unix.  It exists and many, many people use
      it.  It is probably the largest 'market' for Common Lisp so you should
      take it seriously.

You may disagree with converting everything to lower case because it will
have a visible effect on what you see on your terminal. Based on what I see
in the Common Lisp Manual and Lisp Machine Manual, I get the feeling that
some people feel that lower case is more readable than upper case.
If people really didn't want to see Lisp in lower case, they would have
forced the manual writers to switch to upper case.



					John Foderaro
					


∂25-Aug-82  1243	Alan Bawden <ALAN at MIT-MC> 	SET vs. SETF   
Date: 25 August 1982 15:31-EDT
From: Alan Bawden <ALAN at MIT-MC>
Subject:  SET vs. SETF
To: EAK at MIT-MC
cc: Common-Lisp at SU-AI

Changing SETF to SET at this point seems like the height of gratuity.  If we
are going to get involved in general name changes like this I can generate a
list of about 100 (reasonable) name changes that can keep us busy for months
arguing about their various merits.

∂25-Aug-82  1248	lseward at RAND-RELAY 	case sensitivity 
Date: Wednesday, 25 Aug 1982 12:37-PDT
cc: UCBKIM.jkf at UCB-C70, lseward at RAND-RELAY
Subject: case sensitivity
To: common-lisp at SU-AI
From: lseward at RAND-RELAY

I agree with Foderaro about case sensitivity, i.e. it should be allowed, but
come to a slightly different conclusion.  Given a case sensitive os, I would
like the following to happen.  If I say 'foo' and foo is a reference to a
lisp object, e.g. a function, I want it to match Foo, fOO or FoO. Id's should
not be sensitive to case. However if 'foo' is being passed to the os,
e.g. a file name, case translation should not occur.

Separating these two situations is non-trivial, and probably not completely
possible.  If such a differentiated approach is not feasible then the user
should have 2 options:
  1) case sensitive or not
  2) if not case sensitive then a choice of folding to either upper or
     lower

Lower case definitely improves readability.  If it is not in the standard
then general acceptance will suffer.

larry seward

∂25-Aug-82  1357	Earl A. Killian            <Killian at MIT-MULTICS> 	set vs. setf
Date:     25 August 1982 1321-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  set vs. setf
To:       Common-Lisp at SU-AI

    Date: 25 August 1982 04:41-EDT
    From: Robert W. Kerns <RWK at MIT-MC>

    Rather than trying to eliminate the letter 'F' from the language,
    why not consider that this letter 'F' helps classify all these
    special forms as being related?  I always think of 'F' as standing
    for 'FORM', as in SET-FORM.

I don't think that this is necessary at all.  They are related merely in
that they store, which their names already imply.  Since the function
that stores without taking a "place" is an endangered species, there is
really no need for the "F".  But if you're serious about this, or just
believe in consistency, then I assume you must be in favor of appending
an "F" to PUSH, PUSHNEW, and POP (and whatever other functions I didn't
notice).  If you don't care about consistency, but don't want to change
the status quo much, then you should at least remove the "F" from
comparatively new functions such as SWAPF, EXCHF, INCF, DECF, etc.
Having both PUSH and EXCHF as names seems just plain weird to me.

Moon's complaint is that removing the "F" doesn't work for GETF (there
are no other cases that I can find in the language -- REM has been
renamed REMAINDER, and so is not a problem).  Thus the problem is to
find a good name for the function that gets from a property list (as
opposed to a symbol) (I don't think eliminating the function is the
right way to deal with this either!).  So how about GETPR for this?
REMF would become REMPR for consistency, of course.


P.S. to GLS: the concept index should probably include "place" and cross
reference the functions that use it.

∂25-Aug-82  1434	Earl A. Killian            <Killian at MIT-MULTICS> 	SET vs. SETF
Date:     25 August 1982 1357-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  SET vs. SETF
To:       Common-Lisp at SU-AI

    Date: 25 August 1982 15:31-EDT
    From: Alan Bawden <ALAN at MIT-MC>

    Changing SETF to SET at this point seems like the height of gratuity.  If we
    are going to get involved in general name changes like this I can generate a
    list of about 100 (reasonable) name changes that can keep us busy for months
    arguing about their various merits.

Back when the number of objectionable names (to me) in the language was
around 100, I was reluctant to suggest changes (though I have certainly
been guilty of it several times).  I have been pleasantly surprised to
find that most of those names have been fixed in some way or another
(there were 35 or so renamings at the last meeting alone!), to the point
where only nconc (nappend), nreconc (nrevappend), and [f]makunbound
(make-unbound?) really seem objectionable to me.  I decided to suggest
changing SETF now because 1) SET was no longer around, 2) SETF is going
to be very common (more than it is now) and a shorter, easier to
pronounce name would be nice for writing new code (no reason to go back
and change all the existing SETF's since there can be a simple synonym),
and 3) Common Lisp is already making a lot of changes in this general
area (e.g. eliminating aset, vset, setplist, etc. etc.).

∂25-Aug-82  1442	Scott E. Fahlman <Fahlman at Cmu-20c> 	SETF, case, etc.
Date: Wednesday, 25 August 1982  17:41-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: common-lisp at SU-AI
Subject: SETF, case, etc.


There must come a time when finalizing the manual takes precedence over
endless twiddling of the names of various functions.  Last Saturday's
meeting was that time, in my opinion.  We should now go ahead with the
names we have decided upon, and suggestions for name changes on the
basis of someone's idea of good taste or elegance should be considered
out of order.  Only those changes that make some real difference
(avoiding a conflict that we failed to notice before, etc.) should be
considered at this point.

In a similar vein, we considered the issue of case-sensitivity about
nine months ago and settled on the current scheme.  I don't think that
there is much to be gained by reopening the issue now.  Case-sensitivity
is not something that we can leave to individual choice if code is to be
portable, and the overwhelming majority of people wanted to avoid making
Common Lisp case-sensitive.  Several attempts were made to come up with
a coherent scheme to match symbols in a case-insensitive way but to type
them out in whatever case they were first seen in; all of these attempts
failed.

As it stands, intern IS case-sensitive, but the reader upper-casifies
things by default.  It is easy for users to turn off the upper-case
conversion in the reader, and then they have a case-sensitive Lisp.
However, the built-in symbols are all upper-case, so these users have to
type them in that way.  Code that is intended to be portable should
assume the default environment, which does not preserve case.  It is
somewhat arbitrary that upper-case was chosen over lower-case as the
default, but for portability there has to be an internal default and the
tradition of past Lisps won out over the tradition of Unix.

At one point we discussed the possibility of adding a switch to print
symbols preferentially in lower-case.  I can't find this in the current
manual, but it would be easy to add as an implementation-dependent
extension.  This causes no problems, as long as the resulting output is
read back in through the upper-casifying reader.  

-- Scott

∂25-Aug-82  1450	Howard I. Cannon <HIC at MIT-MC> 	Issue #106 
Date: 25 August 1982 17:46-EDT
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  Issue #106
To: Moon at SCRC-TENEX
cc: common-lisp at SU-AI, STEELE at CMU-20C

    Date: Monday, 23 August 1982, 20:45-EDT
    From: David A. Moon <Moon at SCRC-TENEX>
    To:   STEELE at CMU-20C
    cc:   common-lisp at SU-AI
    Re:   Issue #106

    According to my notes of the meeting, we agreed on CATENATE rather than
    CONCATENATE.  I was sitting at the opposite end of the table from GLS.
    Opinions/facts?


We absolutely agreed on CONCATENATE.  I remember you didn't like it.

∂25-Aug-82  1511	Earl A. Killian            <Killian at MIT-MULTICS> 	SETF, case, etc. 
Date:     25 August 1982 1452-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  SETF, case, etc.
To:       Fahlman at CMUc
cc:       Common-Lisp at SU-AI

The manual still seems obviously incomplete to me (just look at the
compiler and evaluator chapters), so I assume there will still be at
least one more meeting, etc., and some of these ideas are still worth
bringing up.  If that is not the case, then someone should say so.

However, I agree with you on the subject of case sensitivity.  There
were many serious attempts to find a way to preserve case that failed.
Not uppercasing on READ should be easy enough as is so that a special
option shouldn't be necessary, though I'm at a loss as to how to do it
from the description in the current manual (I hope this is a failure in
the documentation and not the language).  Doing this would let you write
code as in

(SETQ MultiWordVarName NIL)

as many Interlisp users do all the time (though possible in Maclisp, it
never caught on).  This is probably not quite what the lowercasers want,
but given the strong feelings of uppercasers, I don't see how any more
can be done.  An option to print in lowercase seems pretty reasonable.

Also, for passing things to the os, which someone was worrying about, I
expect that people will use strings, which are not touched, case-wise.


P.S. to GLS: the manual mentions nonterminating macro chars several
times, but doesn't really say what their real semantics are.  This needs
to be fixed.

∂25-Aug-82  1757	Jim Large <LARGE at CMU-20C> 	SETF, case, etc.    
Date: Wednesday, 25 August 1982  20:56-EDT
From: Jim Large <LARGE at CMU-20C>
To: Earl A. Killian <Killian at MIT-MULTICS>
Cc: Common-Lisp at SU-AI
Subject: SETF, case, etc.

  Of course, your source code is allowed to be lowercase.  The manual
doesn't forbid (SETQ MultiWordVarName NIL) in a program.
								Jim Large

∂25-Aug-82  2013	Scott E. Fahlman <Fahlman at Cmu-20c> 	SET   
Date: Wednesday, 25 August 1982  23:12-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: common-lisp at SU-AI
Subject: SET


By the way, I completely forgot about SET (in its traditional meaning)
when enumerating the things I would like to spare from the SETF wrecking
ball.  (That's the danger of saying "we should get rid of everything
except... ")  I'd like to keep old-style SET around.  I find
(SETF (SYMEVAL x) value) to be confusing.  This should be legal, but not
the one and only way to do this.
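
That is, both of the following would store into the value cell of the
symbol that is X's value, but the first says it more directly:

	(SET X 'FOO)                ; old-style SET; X's value is a symbol
	(SETF (SYMEVAL X) 'FOO)     ; equivalent, but harder to read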

Keeping SET would have the beneficial side-effect of terminating debate
about whether to turn SETF into SET, though I suppose we could swap
them... (or is it swapf them?)

-- Scott

∂25-Aug-82  2328	Kim.jkf at Berkeley 	case sensitivity, reply to comments    
Date: 25 Aug 1982 23:15:45-PDT
From: Kim.jkf at Berkeley
Mail-From: UCBKIM received by UCBVAX at 25-Aug-82 23:23:57-PDT (Wed)
Date: 25-Aug-82 23:22:56-PDT (Wed)
From: Kim:jkf (John Foderaro)
Subject: case sensitivity, reply to comments
Message-Id: <60852.29572.Kim@Berkeley>
Via: ucbkim.EtherNet (V3.147 [7/22/82]); 25-Aug-82 23:22:58-PDT (Wed)
Via: ucbvax.EtherNet (V3.147 [7/22/82]); 25-Aug-82 23:23:57-PDT (Wed)
To: common-lisp@su-ai

  I don't think that it is too late to discuss this issue.  Surely no
implementation or planned implementation depends so much on conversion to
upper case that it would require more than a few minutes to alter it to
convert to lower case and have case sensitivity as an option.

  Scott mentioned that this issue was decided nine month ago and that
      ... the overwhelming majority of people wanted to avoid making
      Common Lisp case-sensitive.
  This brings up two questions:
  1) how many people did you have arguing the side of case sensitive Lisps?
  and more importantly:
  2) did anyone suggest the compromise I've proposed which requires both case
  sensitive and case insensitive readers?  If the committee members felt
  that they had to choose between a case sensitive and a case insensitive
  reader, then, based on the committee's composition, it is no surprise that
  they chose a case insensitive one.  


Regarding portability:
    Like Scott, I consider this to be very important.  This is, in fact, the
primary reason why Common Lisp must make the option to be case-sensitive
part of the standard.  If I write a file of case-sensitive code, I need only
put the appropriate eval-when's at the beginning of the code to tell the
loader/compiler to switch to case-sensitive mode when reading the rest of
this file.  That will ensure the portability of my code no matter what the
environment is when it is read in.
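
For concreteness (the switch name below is invented; none is actually
proposed here), such a file might begin with something like:

	(EVAL-WHEN (EVAL COMPILE LOAD)
	  (SETQ *CASE-SENSITIVE-READER* T))  ; hypothetical switch name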

    If case-sensitivity is not part of the Common Lisp standard, then some
sites will add it as an extension and it will just about guarantee that
their code is non-portable.

Conversion to upper or lower case:
    I agree that it is important to always convert to either lower or to
upper case when in case insensitive mode.  The arguments for each case
(according to Scott) are:

upper: 'tradition of past Lisps'.
    What is this tradition based on?  Is it based on the brilliant
insight of a Lisp pioneer?  No, of course not.  It is based on the crummy
upper-case-only teletypes that existed in the old days.  Shall Common Lisp
become a shrine for the memory of the Teletype Model 33?

lower: 'tradition of Unix'
    What is important is not Unix itself, but the fact that as a result of
using Unix many people now enjoy (and expect) case-sensitivity in systems
they use.  If case-sensitivity is put into Common Lisp as an option, then it
is imperative that system symbols be in lower case or else Common Lisp would
be unreasonable to use when case sensitivity is turned on.
If case sensitivity is not put in then the only thing gained by converting
to lower case is readability.


Those are the arguments for upper or lower case.  Since converting to
upper case harms the case sensitive crowd, and  converting to lower
case harms no one but helps the case sensitive crowd, the choice seems
obvious.

    The biggest problem here is inertia.  Just ask yourself whether adding a
case sensitive switch will harm you if you never use it.  Consider that it
will make Common Lisp a lot more palatable to a large group of users.
And consider that without conversion to lower case in case-insensitive mode,
the case-sensitive mode would be almost useless.

    It seems that too much time has been spent trying to find a middle ground
between case sensitivity and insensitivity, that is, one which assumes case
insensitivity and then tries to add features of case sensitive languages
(such as 'what you type in is what gets printed out').  I don't think this
kind of thing is worth the effort. It certainly won't satisfy the
case-sensitive person who feels that Foo and foo are distinct things.

					John Foderaro
					


∂25-Aug-82  2358	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Splicing reader macros  
Date: Thursday, 26 August 1982, 02:54-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: Splicing reader macros
To: Common-Lisp at SU-AI

I think I forgot to bring this up at the meeting last Saturday.

The cute kludge by which splicing reader macros (reduced to just
reader macros that don't read anything, e.g. comments and unsatisfied
conditionals) identify themselves, namely returning (VALUES), doesn't
work so nicely now that the multiple-value-taking forms with &OPTIONAL
and &REST have been removed from the language.

Someone (Eric Benson?) suggested that such macros call READ tail recursively.
Of course this doesn't work, since you can have situations like
	(COND ((FOO-P A)
	       (BLATZ B)
	       ;Okay, that takes care of B
	       ))
where the macro is followed by a special token, not by an S-expression.
However, clearly there is a function inside the reader which such macros
could call tail recursively if they could only get at it.  For instance,
in the Lisp machine this is called SI:XR-READ-THING.

I suggest that we come up with a reasonable name for this function, perhaps
something like READ-INTERNAL-TOKEN, document it, and allow reader macros
to call it.  We could document the two values it returns (an object and what
kind of token it is), or we could say that all you can do with the values
is return them to the reader and they are its internal business.
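
A sketch of how a comment macro might then be written (READ-INTERNAL-TOKEN
is only the name suggested above; the details here are guesses):

	(DEFUN SEMICOLON-READER (STREAM CHAR)   ; CHAR is the #\; itself
	  ;; Discard characters through the end of the line.
	  (DO ((C (READ-CHAR STREAM) (READ-CHAR STREAM)))
	      ((CHAR= C #\Return)))
	  ;; Hand control back to the reader's internal routine instead of
	  ;; trying to signal "no values" by returning (VALUES).
	  (READ-INTERNAL-TOKEN STREAM))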

∂26-Aug-82  0014	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	CHECK-ARG-TYPE
Date: Thursday, 26 August 1982, 03:04-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: CHECK-ARG-TYPE
To: Common-Lisp at SU-AI

See p.275 of the 29 July Common Lisp manual and p.275 of the revision
handed out at the Lisp conference.

I suggest that we include CHECK-ARG-TYPE in the language.  Although
CHECK-ARG, CHECK-ARG-TYPE, and ASSERT have partially-overlapping
functionality, each has its own valuable uses and I think all three
ought to be in the language.

Note that CHECK-ARG and CHECK-ARG-TYPE are used when you want explicit
run-time checking, including but not limited to writing the interpreter
(which of course is written in Lisp, not machine language!).

The details:
CHECK-ARG-TYPE arg-name type &OPTIONAL type-string	[macro]

If (TYPEP arg-name 'type) is false, signal an error.  The error message
includes arg-name and a "pretty" English-language form of type, which
can be overridden by specifying type-string (this override is rarely
used).  Proceeding from the error sets arg-name to a new value and
makes the test again.

Currently arg-name must be a variable, but it should be generalized to
any SETF'able place.

type and type-string are not evaluated.

This isn't always used for checking arguments, since the value of any
variable can be checked, but it is usually used for arguments and there
isn't an alternate name that more clearly describes what it does.
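
A usage sketch under this description (the function is invented):

	(DEFUN SQUARE-AREA (SIDE)
	  (CHECK-ARG-TYPE SIDE NUMBER)  ; correctable error if SIDE isn't a number
	  (* SIDE SIDE))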

∂26-Aug-82  0041	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Access to documentation strings   
Date: Thursday, 26 August 1982, 03:25-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: Access to documentation strings
To: Common-Lisp at SU-AI

I think I was supposed to make a proposal about this.

Rather than using some specific property, documentation strings should
be stored in an implementation-dependent way and there should be a
function to access them.  There are many reasons for this, including
multiple documented objects with the same name, documented objects whose
name is not a symbol, and implementations where the documentation is not
a Lisp string until you ask for it (it might reside in the compiled code
in a special machine-dependent format, or it might be retrieved from a
separate documentation file, hopefully in a speedy fashion.)

I don't think we need a separate function to get the brief
documentation, it's just (SUBSEQ doc 0 (POSITION #\Return doc)).  I
suggest the present Lisp machine function DOCUMENTATION, extended to be
similar to GET-SOURCE-FILE-NAME.  The latter function could exist in
Common Lisp as well, although perhaps not all implementations want to
remember source files for each function.

Details:
DOCUMENTATION name &OPTIONAL type
  (VALUES string type)

Accesses the documentation string for a named object.  name is the name
of an object, usually a symbol.  type is a symbol for the type of object
(see below), or NIL meaning take any type that is there, preferring a
function if there is one.  There can be multiple objects of different
types with the same name.

The first value returned is the documentation string, and the second value
is the type of object; this is only useful when the type argument was
NIL or unspecified.  If there is no documentation recorded, or no object
known with this name and type, both values are NIL.  This is not an error.

[Here I do not use "object" in the Smalltalk sense.  Would "definition"
be a better word, or does it imply "function" too strongly?]

Names of objects are usually symbols, although any Lisp object (compared
with EQUAL) is allowed.  Function names can be lists when function specs
exist.  User-defined objects could have almost anything as their name.

By special dispensation, the first argument may be a function (interpreted
or compiled) which is equivalent to supplying the name of the function.

The pre-defined object types are DEFUN for a function, special form, or
macro; DEFVAR for a global variable, parameter, or constant; DEFSTRUCT
for a structure.  There are other implementation-dependent types, and
user programs may freely add their own types.  As you can see the naming
convention is to use the name of the principal defining special form,
if there is one.  Object type symbols are deliberately not keywords,
since user-defined types may need to be protected from each other by the
package mechanism.

There is a companion function:
RECORD-DOCUMENTATION name type string

string can be NIL, which means to forget the documentation.  In some
implementations documentation for some types (especially DEFUN) is
not recorded by calling this function, but is stored some other way;
however, the user can always call RECORD-DOCUMENTATION.

If people prefer, (SETF (DOCUMENTATION name type) string) would be
acceptable for this.  Note that type should not be optional when setting.
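
For example, under this proposal (the names and documentation string
are invented):

	(RECORD-DOCUMENTATION 'FROB-COUNT 'DEFVAR
	                      "Number of frobs allocated so far.")
	(DOCUMENTATION 'FROB-COUNT 'DEFVAR)
	  => "Number of frobs allocated so far." and DEFVAR
	(DOCUMENTATION 'NO-SUCH-THING NIL)
	  => NIL and NIL   ; nothing recorded; not an error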

∂26-Aug-82  0058	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	function specs
Date: Thursday, 26 August 1982, 03:53-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: function specs
To: Common-Lisp at SU-AI

Here is some brief background information on function specs in the Lisp machine.
See page 136 in the gray (or blue, depending on whether you are from the North
or the South) Lisp Machine manual for further information.

The basic idea is that it is useful to store functions in more places than
the definition cell of a symbol, and it is dumb to have to make up generated
symbols when doing this.  The most immediate examples are internal functions
lexically nested inside other functions, methods in class/flavor systems, and
unnamed dispatch functions stored on property lists.  It is very useful to
have a name for such functions so that you can edit them, trace them, get
told their name in the debugger, etc.  It's also very useful to make the whole
thing extensible so users can add their own places to stash functions.

The convention is that the name of any function whose name isn't a symbol
is a list whose first element is a keyword identifying the kind of function,
and whose remaining elements are "arguments" to that keyword.  The first element
doesn't have to be an actual keyword; if it makes sense for a function spec
type to be confined to some particular package, the first element can be
a symbol in that package.

The operations on a function spec are defined by the following functions.
The names are fairly self-evident, so check a Lisp machine manual for details.

(FDEFINE function-spec definition &OPTIONAL carefully-flag no-query-flag)
(FDEFINEDP function-spec) => T or NIL
(FDEFINITION function-spec) => a function
(FDEFINITION-LOCATION function-spec) => a locative pointer to a definition cell
(FUNDEFINE function-spec)
(FUNCTION-PARENT function-spec) => NIL or (VALUES name type) of a top-level
  defining form which generated this function, perhaps along with other things.
  It might be a DEFSTRUCT, for example.
(COMPILER-FDEFINEDP function-spec) => T if it will be fdefinedp at run time
(FUNCTION-SPEC-GET function-spec indicator) => NIL or property
(FUNCTION-SPEC-PUTPROP function-spec value indicator)

One defines a new function spec type by putting a property on its keyword
symbol.  The property is a function which follows a protocol (which I won't
elaborate here) to implement the functions described above.  There is a default
handler to ease implementation of new function spec types.

The function spec types defined in the Lisp environment from which I am sending
this message are as follows.  I'll list and describe them all just to give you an
idea of how this might be used.  Certainly most of these do not belong in
Common Lisp--the point is that they are extensions implementable through a
predefined general mechanism.  The order is alphabetical, not rational.

(:DEFUN-METHOD name) -- an internal function used by the flavor system in
the implementation of a function named <name>; the internal function is called
directly when certain error checking is known to be unnecessary.

(:HANDLER flavor message) -- the function invoked when a certain message
is sent to an object of a certain flavor.  This is different from :METHOD
because of method inheritance and method combination.  This function spec
mainly exists so you can trace it.

(:INTERNAL function-spec index [name]) -- a function nested inside another
function, named function-spec.  index is a number to keep them unique and
name is an optional name (it exists if LABELS rather than a plain LAMBDA
was used to define the function.)

(:LAMBDA-MACRO symbol) -- the function which expands forms like ((symbol ...)...)

(:LOCATION locative) -- a function stored in a particular cell

(:METHOD flavor [modifiers...] message) -- a method which supplies part of
the behavior for a certain message to objects built out of a certain flavor.

(:PROPERTY symbol property) -- a function stored on a property list

(:SELECT-METHOD function-spec message) -- an internal function generated
by DEFSELECT, nested inside a select-function named function-spec.

(:WITHIN within-function renamed-function) -- a function which masquerades
as another function within a third function.  TRACE uses this.
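
For concreteness (the symbol, property, and function are invented), one
might define and call a function living on a property list this way:

	(FDEFINE '(:PROPERTY SHIP DRAW-FUNCTION)
	         #'(LAMBDA (OBJECT) (FORMAT T "~&Drawing ~S" OBJECT)))

	(FDEFINEDP '(:PROPERTY SHIP DRAW-FUNCTION))   ; => T
	(FUNCALL (FDEFINITION '(:PROPERTY SHIP DRAW-FUNCTION)) 'QE2)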

∂26-Aug-82  0934	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	assert
Date: Thursday, 26 August 1982, 12:32-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: assert
To: EAK at MIT-MC, common-lisp at SU-AI
In-reply-to: The message of 24 Aug 82 20:53-EDT from Earl A. Killian <EAK at MIT-MC>

I would have sworn I saw an ASSERT form in the Colander Edition at one
point, but for some reason I can't find it now.  It's not in the index.
Am I confused?

∂26-Aug-82  0939	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	a protest  
Date: Thursday, 26 August 1982, 12:34-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: a protest
To: HEDRICK at RUTGERS, common-lisp at SU-AI
In-reply-to: The message of 24 Aug 82 13:21-EDT from Mgr DEC-20s/Dir LCSR Comp Facility <HEDRICK at RUTGERS>

Jonathan Rees is right.  Since GO is lexical, the compiler can easily
determine at compile-time whether any of the GOs are non-local, and
decide on that basis whether to generate a catch-frame-like-thing for
the PROG or not.

∂26-Aug-82  1059	Guy.Steele at CMU-10A 	Closures    
Date: 26 August 1982 1315-EDT (Thursday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Closures

I must apologize for failing to raise at the Saturday meeting
the one issue Hedrick had asked me to mention.  He noted that
the requirement for closures over special variables has an impact
on the performance of stock hardware for all special variables,
whether closed over or not, and proposed removing this feature
from the white pages.  How do people feel about this?
--Guy

∂26-Aug-82  1119	David.Dill at CMU-10A (L170DD60) 	splicing macros 
Date: 26 August 1982 1358-EDT (Thursday)
From: David.Dill at CMU-10A (L170DD60)
To: common-lisp at su-ai
Subject:  splicing macros
Message-Id: <26Aug82 135845 DD60@CMU-10A>

You can always do a multiple-value-list and see if it's nil.  Maybe we
need a non-consing way to count return values, as suggested earlier.

The reader for Spice Lisp doesn't have a read-internal-token routine.
	-Dave

∂26-Aug-82  1123	Scott E. Fahlman <Fahlman at Cmu-20c> 	Closures   
Date: Thursday, 26 August 1982  14:22-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: Guy.Steele at CMU-10A
Cc: common-lisp at SU-AI
Subject: Closures


I'm willing to waste a cycle or two per call even in compiled code to
have the functionality of closures around.  I think that I would use
closures a lot, once they were available, and wonder how we ever lived
without them.  In fact, I meant to raise the question of whether we need
another function to evaluate an arbitrary form in the environment of a
closure.

Having said that, let me also say that I have not yet thought through
the issue of whether the presence of lexical-scope/indefinite-extent
variables in the language makes dynamic closures unnecessary.  If the
lexical mechanism does most of the useful things that we would otherwise
have to do with closures (generators, families of active objects with
some shared but non-global state...?), then I would favor dropping the
dynamic closures from the language.  The lexical "closures" would
compile better and, in some sense, be more elegant, since dynamically
closing over only a few specific variables is a crock.  Can the Scheme
hackers out there explain to us which uses of dynamic closure are
subsumed under lexical closure and which uses really need the dynamic
closures?

-- Scott

∂26-Aug-82  1219	Scott E. Fahlman <Fahlman at Cmu-20c> 	Closures (addendum)  
Date: Thursday, 26 August 1982  15:08-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: Scott E. Fahlman <Fahlman at CMU-20C>
Cc: common-lisp at SU-AI, Guy.Steele at CMU-10A
Subject: Closures (addendum)


By the way, on the Vax, at least, it is not necessarily the case that
dynamic closures cost you an extra cycle on every special variable
reference.  When you pick up a special value, you have to check for the
unbound marker anyway, and if things are arranged properly this same
check can pick off the case of an EVC-pointer or whatever.  So the only
extra cost is on function entry or when you actually do have a ref to a
closed-over or unbound special.  If there were no checking at all of the
special value before returning it, then the EVC-forward check would
indeed cost you a cycle.  Perhaps Hedrick is picking off unbound
specials some other way (or not at all?) and the closure cost is
therefore real to him.  Even if it cost a whole cycle per special ref,
I'm not sure that that would slow the language down noticeably, since
specials are somewhere down below 10% of all variable refs in the
assorted statistics I've seen.  We can't let them become arbitrarily
bad, but maybe an extra cycle or two is tolerable.

As I said, if we can get the same effect with lexical vars, we should do
it that way.

-- Scott

∂26-Aug-82  1343	Jonathan Rees <Rees at YALE> 	Closures  
Date: Thursday, 26 August 1982  16:34-EDT
From: Jonathan Rees <Rees at YALE>
To: Fahlman at CMU-20C
Cc: Common-Lisp at SU-AI
Subject: Closures

						... Can the Scheme
    hackers out there explain to us which uses of dynamic closure are
    subsumed under lexical closure and which uses really need the dynamic
    closures?

I've been using Scheme-style closures heavily for a couple of years now and
can attest to their wonderfulness.  I am tempted to say that they are
sufficient to handle the applications to which Lisp Machine-style
closures are put, but I say this from a position of weakness since I've
never actually used the other kind of closure.  I suspect that it is the
case that there's no hacker out there who has extensive experience with
both beasts; such a person would be the one to consult.  All I can say
is that I've never felt the need to close over special variables.  Name
the application - I believe Scheme closures work as well as, if not better
than, other kinds.

As many of you know, I'm quite biased on this matter.  But as
implementor and user I strongly advise against the inclusion of Lisp
Machine-style closures in the language.

				Jonathan Rees
				Scheme Hacker

∂26-Aug-82  1428	mike at RAND-UNIX 	RE: CASE SENSITIVITY, REPLY TO COMMENTS  
Date: Thursday, 26 Aug 1982 14:10-PDT
TO: KIM.JKF AT UCB-C70
CC: COMMON-LISP AT SU-AI
SUBJECT: RE: CASE SENSITIVITY, REPLY TO COMMENTS
IN-REPLY-TO: YOUR MESSAGE OF 25 AUG 1982 23:15:45-PDT
 25-AUG-82 23:22:56-PDT (WED).
             <60852.29572.KIM@BERKELEY>
From: mike at RAND-UNIX


I'M GLAD THAT THE ISSUE OF CASE SENSITIVITY WAS REOPENED AS I THINK
THAT THERE IS A REALITY ISSUE HERE THAT IS BEING MISSED.

IF THE STANDARD DOES NOT MAKE PROVISION FOR UPPER/LOWER CASE 
THEN YOU WILL HAVE GUARANTEED THAT FROM DAY ONE THERE WILL BE IMPLEMENTATIONS
OF "COMMONLISP" THAT DO NOT MEET THE STANDARD.  THIS BECAUSE THERE IS A
LARGE COMMUNITY OF COMPUTER AND LISP USERS WHO BELIEVE THAT MOVING
TO UPPER/LOWER CASE IS ONE OF THOSE LITTLE IMPROVEMENTS IN THE LAST TEN YEARS
THAT HAVE MADE WORKING WITH COMPUTERS MORE SWELL.

JOHN HAS OFFERED A VERY REASONABLE IMPLEMENTATION
THAT ALLOWS UPPER-CASE ONLY SYSTEMS TO EXIST AS WELL AS ALLOWING OTHERS
TO MOVE INTO A MORE FULL-ASCII REALITY. 

MICHAEL WAHRMAN
(LISP USER, NOT IMPLEMENTOR)

∂26-Aug-82  1601	Earl A. Killian <EAK at MIT-MC> 	ASSERT 
Date: 26 August 1982 18:48-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject: ASSERT
To: DLW at SCRC-TENEX
cc: Common-Lisp at SU-AI

I don't remember whether ASSERT was in the manual before.  Does
the LISPM already have one?  If not, as a more concrete proposal
I suggest

ASSERT test &optional formatstring &rest formatargs
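
This would allow, for example, something like:

	(ASSERT (> N 0) "N should be positive here, not ~S" N)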

∂26-Aug-82  1602	David A. Moon <MOON at MIT-MC> 	2nd generation LOOP macro   
Date: 26 August 1982 18:51-EDT
From: David A. Moon <MOON at MIT-MC>
Subject: 2nd generation LOOP macro
To: Common-Lisp at SU-AI
Reply-to: BUG-LOOP at MIT-ML

Here is an extremely brief summary of the proposed new LOOP design, which
has not yet been finalized.  Consult the writeup on LOOP in the Lisp
Machine manual or MIT LCS TM-169 for background information.  Constructive
comments are very welcome, but please reply to BUG-LOOP at MIT-ML, not to
me personally.

(LOOP form form...) repeatedly evaluates the forms.

In general the body of a loop consists of a series of clauses.  Each
clause is either: a series of one or more lists, which are forms to be
evaluated for effect, delimited by a symbol or the end of the loop; or
a clause-introducing symbol followed by idiosyncratic syntax for that
kind of clause.  Symbols are compared with SAMEPNAMEP.  Atoms other than
symbols are in error, except where a clause's idiosyncratic syntax permits.

1. Primary clauses

1.1 Iteration driving clauses

These clauses run a local variable through a series of values and/or
generate a test for when the iteration is complete.

REPEAT <count>
FOR/AS <var> ...
CYCLE <var> ...

  I won't go into the full syntax here.  Features include: setting
  to values before starting/on the first iteration/on iterations after
  the first; iterating through list elements/conses; iterating through
  sequence elements, forwards or backwards, with or without sequence-type
  declaration; iterating through arithmetic progressions.  CYCLE reverts
  to the beginning of the series when it runs out instead of terminating
  the iteration.

  It is also possible to control whether or not an end-test is generated
  and whether there is a special epilogue only evaluated when an individual
  end-test is triggered.

1.2 Prologue and Epilogue

INITIALLY form form...		forms to be evaluated before starting, but
				after binding local variables.
FINALLY form form...		forms to be evaluated after finishing.

1.3 Delimiter

DO	a sort of semicolon needed in odd situations to terminate a clause,
	for example between an INITIALLY clause and body forms when no named
	clause (e.g. an iteration-driving clause) intervenes.
	We prefer this over parenthesization of clauses because of the
	general philosophy that it is more important to make the simple cases
	as readable as possible than to make micro-improvements in the
	complicated cases.

1.4 Blockname

NAMED name		Gives the block generated by LOOP a name so that
			RETURN-FROM may be used.

This will be changed to conform with whatever is put into Common Lisp
for named PROGs and DOs, if necessary.

2. Relevant special forms

The following special forms are useful inside the body of a LOOP.  Note
that they need not appear at top level, but may be nested inside other
Lisp forms, most usefully bindings and conditionals.

(COLLECT <value> [USING <collection-mode>] [INTO <var>] [BACKWARDS]
		[FROM <initial-value>] [IF-NONE <expr>] [[TYPE] <type>])
This special form signals an error if not used lexically inside a LOOP.
Each time it is evaluated, <value> is evaluated and accumulated in a way
controlled by <collection-mode>; the default is to form an ordered list.
The accumulated values are returned from the LOOP if it is finished
normally, unless INTO is used to put them into a variable (which gets
bound locally to the LOOP).  Certain accumulation modes (boolean AND and
OR) cause immediate termination of the LOOP as soon as the result is known,
when not collecting into a variable.

Collection modes are extensible by the user.  A brief summary of predefined
ones includes aggregated boolean tests; lists (both element-by-element and
segment-by-segment); commutative/associative arithmetic operators (plus,
times, max, min, gcd, lcm, count); sets (union, intersection, adjoin);
forming a sequence (array, string).

Multiple COLLECT forms may appear in a single loop; they are checked for
compatibility (the return value cannot both be a list of values and a
sum of numbers, for example).

(RETURN value) returns immediately from a LOOP, as from any other block.
RETURN-FROM works too, of course.

(LOOP-FINISH) terminates the LOOP, executing the epilogue and returning
any value defined by a COLLECT special form.

[Should RESTART be interfaced to LOOP, or only be legal for plain blocks?]

3. Secondary clauses

These clauses are useful abbreviations for things that can also be done
using the primary clauses and Lisp special forms.  They exist to make
simple cases more readable.  As a matter of style, their use is strongly
discouraged in complex cases, especially those involving complex or
nested conditionals.

3.1 End tests

WHILE <expr>		(IF (NOT <expr>) (LOOP-FINISH))
UNTIL <expr>		(IF <expr> (LOOP-FINISH))

3.2 Conditionals

WHEN <expr> <clause>	The clause is performed conditionally.
IF <expr> <clause>	synonymous with WHEN
UNLESS <expr> <clause>	opposite of WHEN

AND <clause>		May be suffixed to a conditional.  These two
ELSE <clause>		might be flushed as over-complex.

3.3 Bindings

WITH <var> ...		Equivalent to wrapping LET around the LOOP.
			This exists to promote readability by decreasing
			indentation.

3.4 Return values

RETURN <expr>		synonymous with (RETURN <expr>)

COLLECT ...		synonymous with (COLLECT ...)
NCONC ...		synonymous with (COLLECT ... USING NCONC)
APPEND, SUM, COUNT, MINIMIZE, etc. are analogous
ALWAYS, NEVER, THEREIS	abbreviations for boolean collection

4. Extensibility

There are ways for users to define new iteration driving clauses which
I will not go into here.  The syntax is more flexible than the existing
path mechanism.

There are also ways to define new kinds of collection.

5. Compatibility

The second generation LOOP will accept most first-generation LOOP forms
and execute them in the same way, although this was not a primary goal.
Some complex (and unreadable!) forms will not execute the same way or
will be errors.

6. Documentation

We intend to come up with much better examples.  Examples are very
important for developing a sense of style, which is really what LOOP
is all about.
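
As an illustration only (the design above is explicitly not final, and
the exact clause syntax is partly guessed), a small loop in the proposed
style might read:

	(LOOP REPEAT 10                 ; iteration-driving clause
	      INITIALLY (PRINT "starting")
	      DO                        ; delimiter before plain body forms
	      (COLLECT (RANDOM 100)))   ; the ten values are returned when
	                                ; the loop finishes normally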

∂26-Aug-82  1633	Scott E. Fahlman <Fahlman at Cmu-20c> 	CASE SENSITIVITY, REPLY TO COMMENTS 
Date: Thursday, 26 August 1982  19:31-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: COMMON-LISP AT SU-AI, KIM.JKF AT UCB-C70
Cc: mike at RAND-UNIX
Subject: CASE SENSITIVITY, REPLY TO COMMENTS


I am trying hard to follow JKF's suggestion that we not degenerate to a
lot of name-calling about case-sensitivity and all the other
brain-damage that Unix inflicts on its users, but such restraint is hard
to maintain given messages like the previous one from Michael Wahrman
(whoever he is).

To respond to JKF's earlier message:

It is probably true that the earlier decision about case sensitivity was
made with little input from the Franz/Unix people, who at the time were
not much interested in Common Lisp, so I suppose it is not entirely out
of order to reopen the issue.

I find JKF's proposed "compromise" incoherent.  For their own private
hacking, users can translate everything into EBCDIC, for all I care, but
what we are talking about here is a standard for portable Common Lisp
code.  I see no coherent way to let some users decide that "FOO" and
"Foo" are distinct symbols in their code and to have other users ignore
the difference.  It has to be uniformly one way or the other at the
interfaces.  Making the language case sensitive, in the sense that
the difference between "FOO" and "Foo" matters, is absolutely
unacceptable to me and, I think, to most of the other Common Lisp
implementors.

I think that what we are proposing is being misunderstood, at least by
some people out there.  We certainly are not requiring that Common Lisp
users ever type in things in upper case -- they can use any mixture of
case characters that they want.  If we add the proposed switch to PRINT,
then people never have to see upper-case on typeout.  The Unix people I
have talked to here at CMU are happy with that much: as long as they
don't have to read or write upper-case, then it is not important to them
to have "FOO" not eq to "Foo".  But if you folks really insist on true
case-sensitivity in portable code, then we've got a serious
disagreement.

-- Scott

∂26-Aug-82  2123	Scott E. Fahlman <Fahlman at Cmu-20c> 	Splicing reader macros    
Date: Thursday, 26 August 1982  20:56-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Cc: Common-Lisp at SU-AI
Subject: Splicing reader macros


Moon's suggestion for READ-TOKEN sounds good to me, even though (as Dill
points out) it is not just a matter of documenting something that
already exists in the case of Spice Lisp.  I can imagine a number of
uses for this function in implementing non-standard parsers.

-- Scott

∂26-Aug-82  2123	Scott E. Fahlman <Fahlman at Cmu-20c> 	2nd generation LOOP macro 
Date: Thursday, 26 August 1982  20:43-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: BUG-LOOP at MIT-ML
Cc: Common-Lisp at SU-AI
Subject: 2nd generation LOOP macro


Moon's description of LOOP is reasonably clear.  To me, LOOP looks like
a lot of hairy syntax for no reason.  The equivalent DO constructs look
simpler and clearer to me in almost all cases, but then I'm a
conservative -- I don't like CLISP or CGOL either.  People keep coming
up with these things, so there must be a need felt in some quarters to
which I am insensitive.  I would have said that this sort of thing is a
training/transition aid for those not comfortable with Lisp, but
considering the source of this proposal that can't be the true story.

Is there any reason why LOOP should not be a yellow-pages package for
those who like this sort of syntax?

-- Scott

∂26-Aug-82  2128	mike at RAND-UNIX 	Re: CASE SENSITIVITY, REPLY TO COMMENTS  
Date: Thursday, 26 Aug 1982 18:38-PDT
To: common-lisp at SU-AI
Subject: Re: CASE SENSITIVITY, REPLY TO COMMENTS
In-reply-to: Your message of Thursday, 26 August 1982  19:31-EDT.
From: mike at RAND-UNIX

(Apologies to those on the list who have heard this and are sick
of the issue).

Scott,

In fact I HAD misunderstood the issue, so thank you for correcting
me.  I was not intending to propose that atoms and functions be
case sensitive, although I could certainly live with that.  I was
proposing that, at least, CommonLisp be case insensitive so that
I would not have to write CAR in all upper case or have the 
interpreter barf at me.  Clearly you have dealt with this issue to
some degree.

As for who I am: I am a computer graphicist at Robert Abel and
Associates, a special effects and film production company in 
Hollywood. 

Regards,
Michael Wahrman


∂26-Aug-82  2144	Kim.fateman at Berkeley  
Date: 26 Aug 1982 17:38:18-PDT
From: Kim.fateman at Berkeley
To: common-lisp@su-ai

As attested to by the common lisp manual itself, it seems lower case
code looks better;  anyone who deliberately writes code that
includes Foo and foo and FoO with the intention that those items
be the same should get his/her keyboard fixed;  there is a 1 line
UNIX command to map all these to (for example) lower case:

tr A-Z a-z <input > output.

It seems like a rather small burden to portability to insist that
functions be spelled the same way each time, just in case someone
reads a package into a case-sensitive implementation.

But the point I wanted to make is rather different.  Sometimes
case is rather useful.  In mathematical notation, where most people
refer to items by single symbols (hence the use of Greek, Hebrew, and
rather arcane fonts), the absense of distinction between x, X,  and
to continue...  bold x, bold X, italic x, italic X,  .. would not
be considered seriously, I think.  The default in PDP-10 Macsyma
(upper-caseifies) is quite wrong, but there is a switch to change
it (bothcases:true$)

∂26-Aug-82  2149	Scott E. Fahlman <Fahlman at Cmu-20c> 	Access to documentation strings
Date: Thursday, 26 August 1982  21:17-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: Common-Lisp at SU-AI
Subject: Access to documentation strings


Moon's proposal for DOCUMENTATION looks good to me.

By the way, I think the "convention" that the first line of the
documentation should be an overview sentence is bad news.  If we want
anything of this sort, the convention should be that the first SENTENCE
(normal English syntax) is an overview.  Those of us stuck on narrow
terminals can't get much onto a "line" and I hate to leave out the CR
and let the line wrap.  My preference would be to forget this whole
overview business -- I don't see much use for it.

-- Scott

∂26-Aug-82  2149	Scott E. Fahlman <Fahlman at Cmu-20c> 	function specs  
Date: Thursday, 26 August 1982  23:01-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: Common-Lisp at SU-AI
Subject: function specs


I have read Moon's description of the function spec business, and then
went back and read the Gray edition of the Chine Nual, and the whole
thing still looks totally bogus to me.  I just don't see why you want to
confound the notion of where a function-object gets stashed with the
notion of what its name is.  If you want your function to have a name, I
see no reason for the name not to be a symbol -- then you can give it
properties, apply it, and so on.  If you don't want to name the thing,
just use lambda and pass around the function object itself.  If you
stash function objects in funny places, why not just put them there
without all the sound and fury?  What could be more extensible than
that?  I admit that we need something clean to replace the ugly old
Maclisp (DEFUN (symbol property) ...) business, but this proposal seems
like massive overkill and is extremely confusing, to me at least.

-- Scott

∂27-Aug-82  0924	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	ASSERT
Date: Friday, 27 August 1982, 10:56-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: ASSERT
To: EAK at MIT-MC
Cc: Common-Lisp at SU-AI
In-reply-to: The message of 26 Aug 82 18:48-EDT from Earl A. Killian <EAK at MIT-MC>

The Lisp Machine does not already have ASSERT.

∂27-Aug-82  1059	MOON at SCRC-TENEX 	2nd generation LOOP macro
Date: Friday, 27 August 1982  13:28-EDT
From: MOON at SCRC-TENEX
To: Scott E. Fahlman <Fahlman at Cmu-20c>
Cc: BUG-LOOP at MIT-ML, Common-Lisp at SU-AI
Subject: 2nd generation LOOP macro

    Date: Thursday, 26 August 1982  20:43-EDT
    From: Scott E. Fahlman <Fahlman at Cmu-20c>

    Is there any reason why LOOP should not be a yellow-pages package for
    those who like this sort of syntax?
One reason is that it is unlikely that any portable code I write will
do its iteration with PROG or DO.

∂27-Aug-82  1140	MOON at SCRC-TENEX 	splicing reader macros   
Date: Friday, 27 August 1982  14:19-EDT
From: MOON at SCRC-TENEX
To: common-lisp at sail
Subject: splicing reader macros

I should point out explicitly that if you don't document what
"read-internal-token" returns, but only say that READ understands
it, then it can just as well do no I/O and simply return constant
values: whatever READ cares to use as a "no values macro" flag!

∂27-Aug-82  1140	MOON at SCRC-TENEX 	assert    
Date: Friday, 27 August 1982  14:23-EDT
From: MOON at SCRC-TENEX
To: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Cc: common-lisp at SU-AI, EAK at MIT-MC
Subject: assert

    Date: Thursday, 26 August 1982, 12:32-EDT
    From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>

    I would have sworn I saw an ASSERT form in the Colander Edition at one
    point, but for some reason I can't find it now.  It's not in the index.

It's in the addendum that was passed out at the Lisp conference.

∂27-Aug-82  1141	MOON at SCRC-TENEX 	case sensitivity    
Date: Friday, 27 August 1982  14:39-EDT
From: MOON at SCRC-TENEX
to: common-lisp at su-ai
Subject: case sensitivity

Just in case there are people who think that case insensitivity is
wanted by a bunch of old fogeys with model 33 teletypes who haven't yet
experienced the joyous revelation of case sensitivity, let me point out
that when I switched from a case-sensitive system and Lisp to a case-
insensitive one, I found it to be a big improvement.  I think this is
because I don't pronounce the upper-case letters when I speak.

So I'm not willing to believe that case-sensitivity is the wave of the
future.  Nor am I willing to believe that having a monocase system
where the case is always lower case is better (or worse) than having
a monocase system where the case is always upper case.

I don't think we can satisfy everybody on this one.  Let's leave things
the way they are unless someone comes up with a concrete, fully
specified, practical proposal for a set of modes that collectively
satisfy everyone and are not incompatible.

∂27-Aug-82  1140	MOON at SCRC-TENEX 	dynamic closures    
Date: Friday, 27 August 1982  14:16-EDT
From: MOON at SCRC-TENEX
To: common-lisp at sail
Subject: dynamic closures

Let me point out that it is trivial to implement dynamic closures in a
shallow-bound system with no extra cost for special-variable accessing and
storing, as long as you are willing to make BOTH entering a closure and
leaving a closure expensive.  The only reason the Lisp machine did it the
way it did was to enable the normal special-variable unbinding mechanism to
be used to leave the environment of a closure.  Thus only entering a
closure is expensive.  If you're willing to copy values in and out, even
when you throw through the application of a closure, and to search the
binding stack if a closure is called recursively, you don't need invisible
pointers.
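
To make the strategy concrete, here is a minimal sketch of the copy-in,
copy-out approach, using invented names (DYN-CLOSURE and friends) and
assuming the captured variables are special; it is only an illustration,
not the Lisp machine's implementation, and it ignores the recursive-call
and binding-stack-search refinements mentioned above.

(DEFSTRUCT DYN-CLOSURE
  FUNCTION      ; the function to call
  VARIABLES     ; list of special variable names captured by the closure
  VALUES)       ; saved values, parallel to VARIABLES

(DEFUN MAKE-DYNAMIC-CLOSURE (VARS FN)
  "Capture the current dynamic values of VARS along with FN."
  (MAKE-DYN-CLOSURE :FUNCTION FN
                    :VARIABLES VARS
                    :VALUES (MAPCAR #'SYMBOL-VALUE VARS)))

(DEFUN CALL-DYNAMIC-CLOSURE (CLOSURE &REST ARGS)
  "Swap the saved values in on entry and back out on exit.
Entry and exit are both expensive; variable access inside stays cheap."
  (LET* ((VARS (DYN-CLOSURE-VARIABLES CLOSURE))
         (OUTSIDE (MAPCAR #'SYMBOL-VALUE VARS)))
    (UNWIND-PROTECT
        (PROGN (MAPC #'SET VARS (DYN-CLOSURE-VALUES CLOSURE))
               (APPLY (DYN-CLOSURE-FUNCTION CLOSURE) ARGS))
      ;; Runs even when you throw through the closure: save the closure's
      ;; (possibly updated) values, then restore the outside values.
      (SETF (DYN-CLOSURE-VALUES CLOSURE) (MAPCAR #'SYMBOL-VALUE VARS))
      (MAPC #'SET VARS OUTSIDE))))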

In a deep-bound system, you have already paid the cost of closures and
adding them costs nothing, of course.

Certainly many of the uses of dynamic closures are better done with
lexical closures, when you have a full upward-funarging lexical Lisp.
I doubt that no uses for dynamic closures remain (unless you are a
purist like Rees and believe that no uses for dynamic variables
remain).  However, I have not yet thought out the issues.

I have no opinion one way or the other as to whether dynamic closures
belong in the Common Lisp kernel language.  Certainly the Lisp machine
will not get rid of them.

∂27-Aug-82  1219	MOON at SCRC-TENEX 	function specs 
Date: Friday, 27 August 1982  15:00-EDT
From: MOON at SCRC-TENEX
To: Scott E. Fahlman <Fahlman at Cmu-20c>
Cc: Common-Lisp at SU-AI
Subject: function specs

The points are these:

1. To have the names be understood by programs, so that you can manipulate
functions that are stored in odd places the same way you manipulate functions
that are stored in the normal place.  This is why the name of a function and
the name of where it is stored want to be the same.

2. To avoid having to cons up ridiculous symbols, like FOO-INTERNAL-GO0067,
by not requiring the names of all functions to be symbols.

Is it that you disagree that symbols like that are ridiculous, or is it that
you don't see what use it is to be able to manipulate (TRACE, for example)
all functions in a uniform way?

∂27-Aug-82  1505	Richard M. Stallman <RMS at MIT-OZ at MIT-AI> 	SET
Date: 27 Aug 1982 1754-EDT
From: Richard M. Stallman <RMS at MIT-OZ at MIT-AI>
Subject: SET
To: common-lisp at SU-AI

I am happy to have SET eliminated from the definition of common lisp,
but that doesn't mean I'm willing to stop supporting it, with its
present meaning, on the Lisp machine.  I don't want to find every
SET in the Lisp machine system, or make the users do so.
-------

∂27-Aug-82  1647	Richard M. Stallman <RMS at MIT-ML>
Date: 27 August 1982 19:47-EDT
From: Richard M. Stallman <RMS at MIT-ML>
To: common-lisp at SU-AI

Wouldn't it be more uniform for the arguments to ARRAY-DIMENSION to put
the array before the dimension number?
Since this function is new, it is no loss to change it.

∂27-Aug-82  1829	JLK at SCRC-TENEX 	2nd generation LOOP macro 
Date: Friday, 27 August 1982  21:25-EDT
From: JLK at SCRC-TENEX
To: MOON at SCRC-TENEX
Cc: BUG-LOOP at MIT-ML, Common-Lisp at SU-AI, 
      Scott E. Fahlman <Fahlman at Cmu-20c>
Subject: 2nd generation LOOP macro

I don't feel like arguing yet again why DO is fundamentally inadequate
and causes you to write unmodular, unreadable code when you use it for
advanced parallel/serial binding and flexibly-sequenced initial, end-test,
and exit clauses, but maybe someone who is more energetic should undertake
this.  I believe it is worth the cost of the keyword syntax.

∂28-Aug-82  0449	Scott E. Fahlman <Fahlman at Cmu-20c> 	2nd generation LOOP macro 
Date: Friday, 27 August 1982  21:11-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   BUG-LOOP at MIT-ML, Common-Lisp at SU-AI
Subject: 2nd generation LOOP macro


    One reason (for putting LOOP in the white pages) is that it is
    unlikely that any portable code I write will do its iteration
    with PROG or DO. -- Moon

Well, it is OK to write a portable package that explicitly requires
something from the yellow pages, so we could still use your portable
code and include the LOOP package with it.  If the white pages are to
include everything that any individual wants to use in his code then we
would have to include CGOL, Interlisp Compatibility Package, Flavors,
three kinds of Smalltalk, Actors, Dick Waters' pseudo-Fortran macros,
etc.

My philosophy on this, such as it is, is that when a package is not
essential and when there is a substantial portion of the community that
has some doubts about the package's merits, it should go into the yellow
pages.  There it can compete in the marketplace of ideas.  Perhaps it
will come to be used by most of us, and we can promote it to the white
pages at that time.  Perhaps something better will come along to fill
the same niche, and in that case we will not be burdened with the
original package forever.  Perhaps users will decide that it is easier
just to do whatever the package does by hand than to remember its
complexities, and in that case we will have spared the readers of the
white pages from dealing with a lot of needless complexity.

I could be wrong, but I think that the proposed LOOP package only
appeals to a few of us.  If so, I think a probationary period in the
yellow pages is appropriate.  If I'm wrong and I'm the last holdout for
the old Lispy syntax, then LOOP should go directly into the white pages;
in that case, I hope it can be documented very clearly, since new users
are going to have to absorb it all.

-- Scott

∂28-Aug-82  0848	MOON at SCRC-TENEX 	Yellow pages   
Date: Saturday, 28 August 1982  11:44-EDT
From: MOON at SCRC-TENEX
To: Scott E. Fahlman <Fahlman at Cmu-20c>
Cc: BUG-LOOP at MIT-ML, Common-Lisp at SU-AI
Subject: Yellow pages

I guess I misunderstood the philosophy then.  If the "yellow pages"
things work in every implementation, just like the "white pages" things,
then I'm happy with LOOP being in the yellow pages.  I don't mind LOOP
not being in the part that we say we will never (well, hardly ever)
change, as long as writers of portable code are not discouraged from
learning about it and using it.

∂28-Aug-82  0853	MOON at SCRC-TENEX 	Order of arguments to ARRAY-DIMENSION   
Date: Saturday, 28 August 1982  11:50-EDT
From: MOON at SCRC-TENEX
Subject: Order of arguments to ARRAY-DIMENSION
to: common-lisp at SU-AI

I agree with Stallman.

∂28-Aug-82  1032	Scott E. Fahlman <Fahlman at Cmu-20c> 	Yellow pages    
Date: Saturday, 28 August 1982  13:30-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Common-Lisp at SU-AI
Subject: Yellow pages


Well, as I have always envisioned it, many yellow pages things (LOOP,
Flavors, CGOL, bigfloats with precision hacking) will be portable Common
Lisp packages that will work in any Common Lisp.  Others (a "universal"
menu system, for example) will provide a common Lispy interface, but
will exist in different versions for different implementations: one
might do menus (as well as is possible) on a 24x80 ASCII screen, another
might do menus on a 3600 with mouse selection, another might use audio
I/O, etc.  Finally, the yellow pages would contain some stuff that is
only for one implementation: a communication package for VAX Unix, for
example.  The documentation would clearly indicate which sort of thing
each package is, and things would be organized so that the universal
packages (that we want people to play with) are not hidden among the
system-dependent hacks.

In this way, I think we retain many of the best features of the
traditional route by which things find their way into Lisp, without as
much chaos as we see currently.  First, someone writes a package for his
own use, then shares it privately with some friends, then cleans it up
and documents it and submits it to the yellow pages, and then, if it
catches on with almost everyone, it goes into the language proper.  The
only novel item here is that we exert some quality control at the point
of entry to the yellow pages.

The yellow pages librarian cannot reject a package because he doesn't
like what it does, but only on the grounds that it is not adequately
documented or that the code is buggy or unmaintainable.  The code in the
yellow pages library must either be public domain or there must be
explicit permission to distribute it together with the rest of the
library.  I would guess that Symbolics, DEC, 3RCC, and others would
also have their own proprietary libraries, modelled after the yellow
pages, of things that they want only their own customers to have;
hopefully this won't get out of hand and all vendors will realize the
benefits of commonality within the common Lisp community for
utility-like things.

-- Scott

∂28-Aug-82  1312	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	COMPILE-FILE  
Date: Saturday, 28 August 1982, 16:09-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: COMPILE-FILE
To: Common-Lisp at SU-AI

COMPILE-FILE (known as COMFILE until last week) needs some of the same keywords
as LOAD, specifically :PACKAGE, :VERBOSE, :PRINT, and :SET-DEFAULT-PATHNAME.
I suggest that the optional argument be changed to an :OUTPUT-FILE keyword as well.
There are likely to be additional implementation-dependent keywords such as :OPTIMIZE.

COMPILE-FILE input-file &KEY :OUTPUT-FILE :PACKAGE :PRINT :SET-DEFAULT-PATHNAME :VERBOSE
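
For concreteness, a call under this proposal might look like the following
(the file names and keyword values here are only illustrative):

(COMPILE-FILE "QUERY.LSP" :OUTPUT-FILE "QUERY.BIN" :VERBOSE T :PRINT NIL)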

∂28-Aug-82  1821	FEINBERG at CMU-20C 	2nd generation LOOP macro    
Date: 28 August 1982  21:21-EDT (Saturday)
From: FEINBERG at CMU-20C
To:   Scott E. Fahlman <Fahlman at CMU-20C>
Cc:   BUG-LOOP at MIT-ML, Common-Lisp at SU-AI
Subject: 2nd generation LOOP macro

	I suspect many people have not seen the complete proposal for
the Common Lisp loop macro yet, and therefore it is premature to
include it in the White Pages until everyone has a chance to examine
the proposal in depth.  I too wonder whether adding additional
syntax is a good idea for Common Lisp, but would like
to at least see what is being proposed.

∂28-Aug-82  2049	Scott E. Fahlman <Fahlman at Cmu-20c> 	Closures   
Date: Saturday, 28 August 1982  23:48-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: Closures


I've played a little more with the lexical-closure stuff, and am now
pretty confident that they can do everything I might ever want closures
for.  So unless someone comes up with an example of something useful
that cannot be done with the lexical variable mechanism, I would vote to
flush dynamic closures from the white pages.  As Moon points out, Lispm
and friends are free to retain this feature if they like.

-- Scott

∂29-Aug-82  0028	ucbvax:<Kim:jkf> (John Foderaro) 	cases. reader poll   
Date: 28-Aug-82 14:44:21-PDT (Sat)
From: ucbvax:<Kim:jkf> (John Foderaro)
Subject: cases. reader poll
Message-Id: <60852.23129.Kim@Berkeley>
Received: from UCBKIM by UCB-UCBVAX (3.177 [8/27/82]) id a00594; 28-Aug-82 16:08:10-PDT (Sat)
Via: ucbkim.EtherNet (V3.147 [7/22/82]); 28-Aug-82 16:08:18-PDT (Sat)
To: common-lisp@su-ai

Re:
    From: MOON at SCRC-TENEX
    I don't think we can satisfy everybody on this one.  Let's leave things
    the way they are unless someone comes up with a concrete, fully
    specified, practical proposal for a set of modes that collectively
    satisfy everyone and are not incompatible.


 Clearly the ball is in your court. Because you favor the status quo you can
 sit and wait forever for the proposal that 'satisfies everyone'.  You know
 as well as I that it will never come.  What I am proposing is a compromise,
 that is something which will require everyone to make a bit of a sacrifice
 so that no one group has to make a huge sacrifice.  I thought that this was
 the 'spirit' of common lisp; if I am wrong, please let me know.  I look upon
 Lisp as a tool.  The one small change I've proposed will make it useable by
 a much larger community without affecting the current community very much.
   In case you've forgotten, this is what I propose:
   	1) the reader's case-sensitivity is alterable via a switch
	2) when in case-insensitive mode, upper case characters are
	   converted to lower case.
 While this may not be a 'fully specified, concrete proposal', it is the
 minimum required for common lisp to be usable to a case-sensitive user.
 Unless this is agreed upon, I see no reason to go into any greater detail.
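
 A sketch of what such a switch might look like, with invented names (only
 the folding rule itself comes from the two points above):

 (defvar *reader-case-sensitive* nil
   "When nil, the reader folds upper-case characters to lower case.")

 (defun reader-fold-case (char)
   (if *reader-case-sensitive*
       char
       (char-downcase char)))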

 Does the common lisp committee have a formal way of polling its members
 about an issue? (I can't believe that decisions are made based on who flames
 the most on this mailing list).  If so I would like to find out the answer
 to these questions:
 1) Do you think that common lisp should be useable by the people who favor
    case-sensitive systems?

 2) Do you think that converting all characters to lower case is too great a
    sacrifice to be expected of case-insensitive users just to satisfy
    case-sensitive users?

Just in case there is no formal way to poll the members, please send your
answers to me.  Be sure to indicate whether you consider yourself to be on
the official committee or whether you are just on this mailing list for fun.
If I get an overwhelming NO vote I will never bring up this subject again on
this mailing list.





∂29-Aug-82  0839	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Circular structure printing    
Date: Sunday, 29 August 1982, 11:33-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: Circular structure printing
To: common-lisp at su-ai

I have become a bit concerned about the Common Lisp feature that
says that the printer should be able to deal with circular list
structure in the manner specified.  One thing that is particularly
worrisome is that the first occurrence of a shared cons cell has
to be prefixed with a marker, which means that you have to do
complete lookahead at the entire Lisp object you are printing before
you can output the first character.  Is this really intentional?
Does anybody have a printer that does this?  May I examine the
code, please?  Are the requirements in runtime and storage
really acceptable?  It's quite possible that there's nothing
to worry about and somebody has a great solution, but I'd like
to see it.  Thanks.

∂29-Aug-82  0853	Scott E. Fahlman <Fahlman at Cmu-20c> 	Circular structure printing    
Date: Sunday, 29 August 1982  11:54-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Cc:   common-lisp at SU-AI
Subject: Circular structure printing


The inefficiency that you point to is precisely the reason we hung
circular printing under the PRINCIRCLE switch.  If you turn that on,
printing becomes an expensive process, but if you need it, it's worth
the price.

Actually, the right move when this switch is on is probably to just go
ahead and print the stuff normally, looking for circularities as you go,
but into a string rather than out to the screen.  If you make it
through, just dump the string; if you hit a circularity, go back and do
it over.  Then, when the switch is on, it only costs you maybe a factor
of two unless you do in fact hit a circularity.
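
A rough sketch of that strategy (prin1-checking-circularity, which throws
to circularity-found when it detects a shared cell, and print-with-labels,
the expensive labelling printer, are invented names here, not functions in
any existing implementation):

(defun print-possibly-circular (object stream)
  (let ((text (catch 'circularity-found
                (with-output-to-string (s)
                  (prin1-checking-circularity object s)))))
    (if (stringp text)
        (write-string text stream)           ; made it through: dump the string
        (print-with-labels object stream)))) ; hit a circularity: do it over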

-- Scott

∂29-Aug-82  0958	Scott E. Fahlman <Fahlman at Cmu-20c> 	cases. reader poll   
Date: Sunday, 29 August 1982  12:57-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: cases. reader poll


Since JKF asks about our voting procedures, and since there seem to be
some new folks on the list, we should review the decision procedures
under which we have been working.

Basically, the decisions have been made by attempting to find consensus
among those working on actual Common Lisp implementations.  Such people
obviously have a sort of veto power -- if some feature is truly
unacceptable to them, they have the ability to walk out and take their
implementation with them.  We have also actively sought advice and
suggestions from selected people whom we believe to have good ideas or
who represent important constituencies whose concerns we want to be
aware of, including the Franz/Unix folks.

On trivial issues, for example whether a function should be named
CONCATENATE or CATENATE, we have voted, since it is clear that nobody is
going to walk out over such a thing; on more important issues, we have
so far been able to reach consensus among the implementors, though we
have each had to compromise on a few things for the sake of the overall
effort.  In these more important debates, it definitely has not been the
case that majority rules.  In the end, it comes down to what Guy decides
to put into the manual and who walks out as a result of that decision;
so far, this power has only been used once, to resolve the impasse over
the symbolness of NIL, and nobody walked out.  That's an amazingly good
record, I think.

This is all sort of like the U.N. -- the General Assembly debates, but
the Security Council is the only body that can send in the troops, and a
few major members have veto power.  This may seem undemocratic, but it
is the most democratic thing I know of that is still likely to produce a
Common Lisp.

So a vote is not the way to decide this, but some additional input from
the unix people would be welcome.  In particular, I would be
interested in whether JKF really is properly characterizing the unix
community, a group that we do want to keep aboard.  It is my suspicion
that, while unix people do not want to type anything in upper case or
see upper case output (we already can handle that), only a few of them
would find Common Lisp unacceptable because "Foo" and "foo" map into the
same symbol.  Some unix people -- not all of them, I bet -- might prefer
case-sensitivity, but that is different from the issue of whether Common
Lisp is "usable by people who favor case-sensitive systems".  That
strikes me as a clear overstatement of the case, though I could be wrong
about this.

-- Scott

∂29-Aug-82  1007	Scott E. Fahlman <Fahlman at Cmu-20c> 	case-sensitivity and portability    
Date: Sunday, 29 August 1982  13:07-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: case-sensitivity and portability


Since JKF is asking people to form an opinion on his "compromise" plan
and vote on it, I would like to put the following arguments on the
record.  (Originally, I sent these to him alone.)  They explain why I
don't view his plan as a minor compromise, and why I oppose it.

-- Scott

----------------------------------------------------------------------

John,

Yes, I understand what you are proposing.  My objection is approximately
(but not quite) your case 2: I am afraid that if we make it both easy
and legal for portable Common Lisp code to be case-sensitive, then a few
people will start writing case-sensitive packages.

It is not that I object to having to type in an occasional backslash or
capital letter, but rather that I object (strenuously) to having to
remember the case of every symbol around.  As long as case-insensitivity
is the universal rule at the interfaces to all packages, then I can go
on living by the simple rule that case is just ignored, and I can type
things in however I like.  But if I load 100 packages and 3 of them have
case-sensitive symbols in them, then I am perforce living in a
case-sensitive Lisp and I have to remember which symbols have to have
slashified upper-case in them and where.  This is what I object to.
Like pregnancy, you can't be a little bit case-sensitive -- either you
are or you aren't.

You say that I shouldn't impose my biases onto the people who like
case-sensitivity.  Well, under the scheme you propose, they are imposing
their biases on me.  Someone has to be imposed upon here, and better you
than me.

You say that I should be grateful for any case-sensitive code that is
written in Common Lisp and therefore should put up with the hassle.  I
guess this comes down to our differing estimates of how attached people
are to case-sensitivity.  If Common Lisp is officially case-insensitive,
would the case-sensitive people refuse to use it?  I don't think so.  It
has been my experience that people who hate case-sensitivity hate it
passionately, and that the people who like it think it's sort of cute
but not a life-or-death matter.  (This issue is distinct from the issue
of whether users ever have to type or see upper-case -- lots of people
ARE passionate about that.)  So I think that if case-insensitivity is
made the law, people will easily adapt and we will avoid creating two
sub-cultures with a case-sensitive interface between them.  Then I can
be grateful for the same code and not have to curse it for bringing case
into an otherwise beautiful universe.  Of course, all of this is only
based on my discussions with CMU Unix types; the Berkeley species may be
more rabid about all this.

I should emphasize that there is a reason for the asymmetry noted above
-- if it were just that the case-insensitive crowd was being arbitrary
and unreasonable, we would not want to reward that.  If you are used to
case-insensitive systems, as most non-Unix people are, then when you go
to a case-sensitive system you make errors with almost everything you
type.  That's one reason why it is so hard to use Unix only occasionally
-- either you use Unix a lot or the case-sensitivity drives you nuts.
But if you are used to case-sensitive systems and move to one that
ignores case, there's no problem referring to things.  The only problem
is that when you want to do some cute naming hack like the one you
mentioned in your mail, you have to think of some way other than case to
distinguish between "Car" and "car".  As they say, that seems like a small
price to pay for portability.

So that's why I'm opposed to letting each file select whether case
sensitivity is on or off and why I want to impose my own biases on
everyone when it comes to the standard for portable Common Lisp code.

-- Scott

∂29-Aug-82  1027	David.Dill at CMU-10A (L170DD60) 	keyword args to load 
Date: 29 August 1982 1256-EDT (Sunday)
From: David.Dill at CMU-10A (L170DD60)
To: common-lisp at SU-AI
Subject:  keyword args to load
Message-Id: <29Aug82 125648 DD60@CMU-10A>

The :package and :verbose keywords make assumptions about the package
system, which is inappropriate given that we haven't standardized on
a package system.

	-Dave

∂29-Aug-82  1153	MOON at SCRC-TENEX 	keyword args to load
Date: Sunday, 29 August 1982  14:41-EDT
From: MOON at SCRC-TENEX
To: David.Dill at CMU-10A (L170DD60)
Cc: common-lisp at SU-AI
Subject: keyword args to load

    Date: 29 August 1982 1256-EDT (Sunday)
    From: David.Dill at CMU-10A (L170DD60)

    The :package and :verbose keywords make assumptions about the package
    system, which is inappropriate given that we haven't standardized on
    a package system.

The :verbose keyword doesn't.  The :package keyword does, so it might be
left out for now or marked as "will exist when packages do" as the colon
character is.

∂29-Aug-82  1205	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Circular structure printing    
Date: Sunday, 29 August 1982, 15:00-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: Circular structure printing
To: Fahlman at Cmu-20c
Cc: common-lisp at SU-AI
In-reply-to: The message of 29 Aug 82 11:54-EDT from Scott E. Fahlman <Fahlman at Cmu-20c>

I realize that it's optional.  I guess it doesn't really matter if it is
extremely inefficient.  I'd still feel better if I could see an
implementation and try it out.

∂29-Aug-82  1221	Earl A. Killian <EAK at MIT-MC> 	SET    
Date: 29 August 1982 15:18-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  SET
To: RMS at MIT-OZ
cc: common-lisp at SU-AI

Alright, reusing the name SET may be a bad idea for backward
compatibility, so the question becomes whether new functions
should have "F" appended to their names.  I say no.  There is
already the precedent of PUSH/POP which would have to be renamed
to be consistent.  It is easier (and more aesthetic I think) to
instead remove the "F" from the new functions, such as EXCHF,
SWAPF, GETF, INCF, etc.

∂29-Aug-82  1502	Scott E. Fahlman <Fahlman at Cmu-20c> 	function specs  
Date: Sunday, 29 August 1982  18:02-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   MOON at SCRC-TENEX at MIT-AI
Cc:   Common-Lisp at SU-AI
Subject: function specs


Dave,

I do find FOO-INTERNAL-G0067 to be ridiculous.  My problem is that I
find (:INTERNAL 67 FOO) or whatever to be equally ridiculous, and
perhaps worse since it requires a whole new family of functions to
access these pseudo-names.

There is no problem in manipulating functions, since they are perfectly
good Lisp objects that can be passed around at will.  The problem with
TRACE and friends is that they cannot do their magic directly to a
function object (well, they could, but it would take an invisible pointer).
Instead, they need to interpose something between a function and
everyone who wants to get at it, and that requires knowing where other
code expects a function to be found.  To me, that still doesn't mean
that the place where a function conventionally lives is in any sense its
"name", except where that place happens to be the definition cell of a
symbol.

One possible approach is to recognize that TRACE does not hack a function,
but rather a place where a function normally lives.  If we give TRACE a
symbol, it just hacks the definition cell.  Otherwise, the argument to
TRACE is a SETF-like expression that points to a place: property list,
entry in an array, or whatever.  That place must contain a function
object, and the encapsulated form is left there in place of the
original.  So (TRACE place) => (SETF place (TRACE-ENCAPSULATE place)).
To me, this seems much more natural than creating this new series of
"names" and a new family of function name hacking forms.

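A minimal sketch of what TRACE-ENCAPSULATE might look like (the name and
what gets printed are only illustrative):

(defun trace-encapsulate (function)
  #'(lambda (&rest args)
      (format t "~&Entering with arguments ~S~%" args)
      (let ((values (multiple-value-list (apply function args))))
        (format t "~&Returning values ~S~%" values)
        (values-list values))))

So tracing a function kept on a property list would amount to
(setf (get 'foo 'testing-function)
      (trace-encapsulate (get 'foo 'testing-function))).
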
Similarly, if the "name" argument to DEFUN is not a symbol, it would
also be a SETF location:

(defun (get 'foo 'testing-function) ...)

In this case, the definition is an anonymous LAMBDA or the compiled form
thereof, and the DEFUN does the obvious SETF.  If the compiled
function-object wants to remember that it was being sent to some
particular location at the time of its DEFUN, so much the better; that
is useful info for the debugger, but it is not a name.

Maybe all we disagree about is whether to call the location expression a
name and whether to make up a whole new syntax for it, rather than using
the SETF syntax.

-- Scott

∂29-Aug-82  1820	Kim.fateman at Berkeley  
Date: 29 Aug 1982 18:17:22-PDT
From: Kim.fateman at Berkeley
To: common-lisp@su-ai

Subject: loop, case, consensus

It might be appropriate to mention the way that Franz now accommodates
various loop packages (simultaneously): there are compile-time packages
that allow (at least) 3 different, and sometimes conflicting loop packages
(from maclisp, UCI lisp, interlisp) to be used in the same run-time
environment. This has enabled us to provide "portability"
in a useful fashion.
Using different packages simultaneously, interpreted, is not supported.

I hope that common lisp supports portability at least as well.
.......
My vote on case-sensitivity is with jkf, for reasons that I have previously
expressed.  For the record, I used case-insensitive Lisps exclusively
from 1967 to 1978.  People who use MultiPLe CaSes Expecting THem to be
Mapped to a SinGle CASe should be asked to map them to a single case
(I prefer lower) before providing them as portable packages.
I also think that Roman, Italics, Greek, Boldface, etc if available
should also be distinct from each other. 
Mathematicians have found this useful even before the TTY33.

.......
I am concerned about consensus on availability of packages.

1. It seems to me that any "package" which is not made 
freely available in at least one correct and complete implementation
in a form based only on the CL kernel (the white pages?) should not
be described in the extended manual (the yellow ?).

2. Stuff which runs in only one current
environment because of OS hooks not in CL should be allowed in
the yellow pages only if the code to make it run is freely available
(in case someone else has a similar OS).

Do we have code for all the yellow pages now?

Do people agree with view 1?  Perhaps my own experience with software
licensing has made me suspicious on some aspects of software sharing.

∂29-Aug-82  1830	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	macro expansion    
Date: Sunday, 29 August 1982, 21:26-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: macro expansion
To: Common-Lisp at SU-AI

Here is my promised proposal, with some help from Alan.

MACRO-P becomes a predicate rather than a pseudo-predicate.
Everything on pages 92-93 (29July82) is flushed.

Everything, including the compiler, expands macros by calling MACROEXPAND
or MACROEXPAND-1.  A variable, *MACROEXPAND-HOOK*, is provided to allow
implementation of displacing, memoization, etc.

The easiest way to show the details of the proposal is as code.  I'll try to
make it exemplary.

(DEFVAR *MACROEXPAND-HOOK* 'FUNCALL)

(DEFUN MACROEXPAND (FORM &AUX CHANGED)
  "Keep expanding the form until it is not a macro-invocation"
  (LOOP (MULTIPLE-VALUE (FORM CHANGED) (MACROEXPAND-1 FORM))
	(IF (NOT CHANGED) (RETURN FORM))))

(DEFUN MACROEXPAND-1 (FORM)
  "If the form is a macro-invocation, return the expanded form and T.
  This is the only function that is allowed to call macro expander functions.
  *MACROEXPAND-HOOK* is used to allow memoization."
  (DECLARE (VALUES FORM CHANGED-FLAG))

  (COND ((AND (PAIRP FORM) (SYMBOLP (CAR FORM)) (MACRO-P (CAR FORM)))
	 (LET ((EXPANDER (---get expander function--- (CAR FORM))))
	   ---check for wrong number of arguments---
	   (VALUES (FUNCALL *MACROEXPAND-HOOK* EXPANDER FORM) T)))
	(T FORM)))

;You can set *MACROEXPAND-HOOK* to this to get traditional displacing
(DEFUN DISPLACING-MACROEXPAND-HOOK (EXPANDER FORM)
  (LET ((NEW-FORM (FUNCALL EXPANDER FORM)))
    (IF (ATOM NEW-FORM)
	(SETQ NEW-FORM `(PROGN ,NEW-FORM)))
    (RPLACA FORM (CAR NEW-FORM))
    (RPLACD FORM (CDR NEW-FORM))
    FORM))
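
For example, a program that wants the traditional displacing behavior just
sets the hook:

(SETQ *MACROEXPAND-HOOK* 'DISPLACING-MACROEXPAND-HOOK)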

The above definition of MACROEXPAND-1 is oversimplified, since it can
also expand other things, including lambda-macros (the subject of a separate
proposal that has not been sent yet) and possibly implementation-dependent
things (substs in the Lisp machine, for example).

The important point here is the division of labor.  MACROEXPAND-1 takes care
of checking the length of the macro-invocation to make sure it has the right
number of arguments [actually, the implementation is free to choose how much
of this is done by MACROEXPAND-1 and how much is done by code inserted into
the expander function by DEFMACRO].  The hook takes care of memoization.  The
macro expander function is only concerned with translating one form into
another, not with bookkeeping.  It is reasonable for certain kinds of
program-manipulation programs to bind the hook variable.

I introduced a second value from MACROEXPAND-1 instead of making MACROEXPAND
use the traditional EQ test.  Otherwise a subtle change would have been
required to DISPLACING-MACROEXPAND-HOOK, and some writers of hooks might get
it wrong occasionally, and their code would still work 90% of the time.


Other issues:

On page 93 it says that MACROEXPAND ignores local macros established by
MACROLET.  This is clearly incorrect; MACROEXPAND has to get called with an
appropriate lexical context available to it in the same way that EVAL does.
They are both parts of the interpreter.  I don't have anything to propose
about this now; I just want to point out that there is an issue.  I don't
think we need to deal with the issue immediately.

A related issue that must be brought up is whether the Common Lisp subset
should include primitives for accessing and storing macro-expansion
functions.  Currently there is only a special form (MACRO) to set a
macro-expander, and no corresponding function.  The Lisp machine expedient of
using the normal function-definition primitive (FDEFINE) with an argument of
(MACRO . expander) doesn't work in Common Lisp.  Currently there is a gross
way to get the macro expander function, but no reasonable way.  I don't have
a clear feeling whether there are programs that would otherwise be portable
except that they need these operations.

∂29-Aug-82  2034	Scott E. Fahlman <Fahlman at Cmu-20c>   
Date: Sunday, 29 August 1982  23:33-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Kim.fateman at UCB-C70
Cc:   common-lisp at SU-AI


I don't think we want to restrict the yellow pages quite as much as
Fateman suggests.  In particular, we will want to include packages that
depend on other yellow pages packages and not just on the Common Lisp
kernel.  The only criteria should be (1) that everything that you need
in order to run a yellow-pages package should be available in Common
Lisp itself or somewhere in the yellow pages and (2) that all
inter-package dependencies are very clearly documented.  Maybe we need a
few more colors of pages to separate universally portable stuff, stuff
that depends on other packages, stuff that depends on
implementation-specific hacks, etc., but this color business is getting
out of hand.  The important thing is to make it clear exactly what the
game is for any given package and to try to keep things as coherently
organized as possible.

I agree with Fateman that things should not be described in the yellow
pages document unless the source code is available and can be freely
distributed with the yellow pages library.  We don't want this
document to be an advertising service for proprietary packages, though
such advertising might form a useful document in its own right.

At present, a few of the planned yellow-pages packages are being written
or exist in earlier incarnations, but there is not as yet anything worth
calling a library.  Obviously, the yellow pages cannot exist until the
white pages have been stable for a while and some correct Common Lisp
implementations are running.  We will want to be very careful with the
documentation of the first yellow pages release so as to get all of this
off on the right foot.

-- Scott

∂29-Aug-82  2056	Scott E. Fahlman <Fahlman at Cmu-20c> 	macro expansion 
Date: Sunday, 29 August 1982  23:56-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Common-Lisp at SU-AI
Subject: macro expansion


At the meeting it was decided that Moon and I would each try to come up
with an improved macro expansion paradigm.  I like what Moon proposes,
so I will not be producing a proposal of my own on this.  The new
proposal is clearly better than the system in the white pages now (which
I originally proposed as a minor improvement over the current Maclisp
system).

The only quibble I have is whether we want to spell *MACROEXPAND-HOOK*
with the stars.  We should only do this if we decide to spell all (or
almost all) built-in global hooks this way.  I am neutral on this issue.

-- Scott

∂29-Aug-82  2141	Kent M. Pitman <KMP at MIT-MC>
Date: 30 August 1982 00:37-EDT
From: Kent M. Pitman <KMP at MIT-MC>
To: Moon at SCRC-TENEX
cc: Common-Lisp at SU-AI

    Date: Thursday, 26 August 1982, 03:25-EDT
    From: David A. Moon <Moon at SCRC-TENEX>
    To:   Common-Lisp at SU-AI
    Re:   Access to documentation strings

    ... I don't think we need a separate function to get the brief
    documentation, it's just (SUBSEQ doc 0 (POSITION #\Return doc)).... 
-----
This same sort of argument is what brought us cute little idioms like
(APPEND X NIL), (SUBST NIL NIL X), (PROG2 NIL X ...), etc.

I think it's worthwhile to provide two fields of the documentation for the
following reasons: 

* Abstraction. A documentation string should be just what it sounds like: a
  string which is not intrinsically interesting to the machine -- just
  something which can be typed at an intelligent entity (eg, a human) to
  provide insight into something.

  It should, as such, have no structure which is not explicit in its
  representation. If there's something magic about the first line and the
  remaining lines, that distinction should be apparent in the
  representation.

* Simplicity. It's a useful case which documentation code might want to do 
  frequently. The user should not be bothered with string-hacking if all he 
  wants to do is get documentation. Might as well make the common case easy.

* Efficiency. Not so much a concern on a large address space machine, but
  still worth considering: Consing should not be required to access a
  documentation string. Experience with Emacs has shown that in certain
  space-critical situations, it's a win to be able to access documentation
  when the rest of the world has ceased to run because of lack of free
  space so you can find the function you need in order to correct the
  problem.
-----
    ... The pre-defined object types are DEFUN for a function, special form, or
    macro; DEFVAR for a global variable, parameter, or constant; DEFSTRUCT
    for a structure....
-----
I would rather see the object types relate to the intended use of the
definition rather than the form used to create it. eg, MACRO and DEFMACRO
both create the same type of object; indeed, if you do one and then the
other the documentation should overwrite one another just as a (MACRO ...)
and (DEFMACRO ...) form would overwrite each other in LispM lisp. Similarly
for variables: I would feel uncomfortable about giving something a DEFVAR
type documentation if I had not DEFVAR'd it. Suppose I had DEFCONST'd it or
just SETQ'd it. Isn't that good enough? I'd rather see the names be something
like :VARIABLE, :MACRO, :FUNCTION, :STRUCTURE, :SPECIAL-FORM, etc.
rather than the name of the typical form that would create the documentation 
type as currently proposed.
-kmp

∂29-Aug-82  2148	Kent M. Pitman <KMP at MIT-MC> 	Access to documentation strings  
Date: 30 August 1982 00:45-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Access to documentation strings
To: Fahlman at CMU-20C
cc: Common-Lisp at SU-AI

    Date: Thursday, 26 August 1982  21:17-EDT
    From: Scott E. Fahlman <Fahlman at Cmu-20c>

    ... If we want anything of this sort, the convention should be that the 
    first SENTENCE (normal English syntax) is an overview.  My preference
    would be to forget this whole overview business -- I don't see much use 
    for it.
-----
Experience with Teco/Emacs shows that these two types of documentation are
tremendously useful to users. I have to say, though, that I've been bothered
on innumerable occasions by the restriction of its being a one-liner. 
Exceptions always come up. The same would happen with a one-sentence 
restriction only worse because finding the end of a sentence is a natural
language task in some cases. I really think a mechanism whose structure is
explicit is necessary to make the thing useful.

∂29-Aug-82  2337	Kent M. Pitman <KMP at MIT-MC> 	No PRINT-time case conversion switch please!    
Date: 30 August 1982 02:33-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject: No PRINT-time case conversion switch please!
To: EAK at MIT-MC, COMMON-LISP at SU-AI, Fahlman at CMU-20C

A switch that decides the case to use at PRINT time can never do the
right thing.  It will work ok for expressions like:

USER INPUT	SYSTEM (normal)		SYSTEM (with funny flag)

  X			X			x
  x			X			x

but consider:

 |x|			|x|			x
 |X|			X			x

These can potentially be wrong. Presumably the `normal' column above
is to print out for re-read by a Maclisp style (uppercasing) reader so that

 User inputs |x|
 System outputs |x|
 System later re-reads |x| ;win

 User inputs |X|
 System outputs X
 System later re-reads |X| ;win

Where if you do translation to lowercase at print-time and later think your
output is re-readable acceptably by some implementation that downcases instead
of upcases, you'll find that

 User inputs |x| into uppercasing system
 System outputs |x|
 Lowercasing system later re-reads x ;win

 User inputs |X| into uppercasing system
 System outputs X
 Lowercasing system later re-reads x ;lose!

This is disappointing because the |...| clearly denote the user's intent that
the system not muck with the case of his input. This case actually comes up in
practice. Consider Maclisp programs like:

(defun get-input ()
  (terpri)
  (princ '|INPUT: |)
  (read))

(defun yes? ()
  (memq (readch) '(/y /Y)))

Say what you'd like to about the reasonableness of printing out symbols
instead of strings, or about using readch instead of in-char or whatever.
Those are clearly substandard coding styles, but people sometimes write
substandard code and I think it's very important that we guarantee that 
Lisp READ/PRINT for simple cases like these preserve a certain amount of
semantic content, especially when the user has gone to the trouble to type
his symbols with /'s or |...|'s, and I don't see any way a PRINT-time decision
can do anything but risk screwing the user.

Anyone who likes to see code in a certain case can write an editor package
(or borrow mine) which upcases or downcases code respecting things like vbars,
doublequotes, semicolons, slashes, etc. But I don't think common lisp should
be cluttered with any dwimish notions of automatic case conversion that work
only most of the time ...

∂30-Aug-82  0007	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 
Date: Monday, 30 August 1982, 02:59-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
To: Kent M. Pitman <KMP at MIT-MC>
Cc: Common-Lisp at SU-AI
In-reply-to: The message of 30 Aug 82 00:37-EDT from Kent M. Pitman <KMP at MIT-MC>

    Date: 30 August 1982 00:37-EDT
    From: Kent M. Pitman <KMP at MIT-MC>

    I think it's worthwhile to provide two fields of the documentation for the
    following reasons: 

    * Abstraction. A documentation string should be just what it sounds like: a
      string which is not intrinsically interesting to the machine -- just
      something which can be typed at an intelligent entity (eg, a human) to
      provide insight into something.

      It should, as such, have no structure which is not explicit in its
      representation. If there's something magic about the first line and the
      remaining lines, that distinction should be apparent in the
      representation.

There is some sense to this.  We could make it be allowed to be either a
string or a list of two strings, if we don't feel that a carriage return is
enough structure.

    * Simplicity. It's a useful case which documentation code might want to do 
      frequently. The user should not be bothered with string-hacking if all he 
      wants to do is get documentation. Might as well make the common case easy.
This is absurd.

    * Efficiency. Not so much a concern on a large address space machine, but
      still worth considering: Consing should not be required to access a
      documentation string. Experience with Emacs has shown that in certain
      space-critical situations, it's a win to be able to access documentation
      when the rest of the world has ceased to run because of lack of free
      space so you can find the function you need in order to correct the
      problem.
This is more absurd.


    -----
	... The pre-defined object types are DEFUN for a function, special form, or
	macro; DEFVAR for a global variable, parameter, or constant; DEFSTRUCT
	for a structure....
    -----
    I would rather see the object types relate to the intended use of the
    definition rather than the form used to create the use. eg, MACRO and DEFMACRO
    both create the same type of object; indeed, if you do one and then the
    other the documentation should overwrite one another just as a (MACRO ...)
    and (DEFMACRO ...) form would overwrite each other in LispM lisp. Similarly
    for variables: I would feel uncomfortable about giving something a DEFVAR
    type documentation if I had not DEFVAR'd it. Suppose I had DEFCONST'd it or
    just SETQ'd it. Isn't that good enough? I'd rather see the names be something
    like :VARIABLE, :MACRO, :FUNCTION, :STRUCTURE, :SPECIAL-FORM, etc.
    rather than the name of the typical form that would create the documentation 
    type as currently proposed.

You didn't read my message carefully enough.  All functions are of type
DEFUN, it doesn't matter what macro you defined them with.  Common Lisp
allows the same name in the space of functions to be both a special form
and a macro; but surely if they didn't have the same documentation there
would be hell to pay!

The types can't be keywords because that interferes with user
extensibility, since all keywords are in the same package.  In the Lisp
machine we chose the name of the most prominent defining form for no
really strong reason, just because it seems senseless to make up a whole
new set of names for this when there are already reasonably mnemonic
names in existence.  Otherwise, especially when you have users defining
their own types, you find you are using the same names as the defining
forms but with the DEF prefix taken off, and you have the problem of
trying to guess whether DEFUN maps into FUN or FUNCTION or UN.

∂30-Aug-82  0748	Masinter at PARC-MAXC 	Object type names
Date: 30-Aug-82  7:22:47 PDT (Monday)
From: Masinter at PARC-MAXC
Subject: Object type names
To: Common-Lisp at SU-AI

The Interlisp file package deals with objects via object type names FNS,
VARS, MACROS, RECORDS, ... (I think there are about 15 built-in types.)

All of the functions in the file package which deal with object types (e.g.,
GETDEF, COMPAREDEFS, MOVEDEF, EDITDEF, etc.) will coerce non-standard synonyms,
e.g. FUNCTION, FUNCTIONS, VARIABLE, ...

The mechanism for doing the coercion, allowing users to add new synonyms, object
types, etc. is not very complex. 

I think it is much more sensible for the user to talk about FUNCTIONS than DEFUN-objects
(or DEFINEQs, for that matter).  It did turn out to be more convenient to have FNS
and VARS rather than FUNCTIONS and VARIABLES, the brevity for common types being as
important as the historical reasons.
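
A sketch of the kind of synonym coercion described here, written in Common
Lisp rather than Interlisp and with an invented table (this is not the
actual file package mechanism):

(defvar *filepkg-type-synonyms*
  '((fns  . (function functions defun))
    (vars . (variable variables defvar))))

(defun canonical-filepkg-type (name)
  "Coerce NAME to a canonical file-package type, or return it unchanged."
  (or (car (find-if #'(lambda (entry) (member name (cdr entry)))
                    *filepkg-type-synonyms*))
      name))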

∂30-Aug-82  0914	ROD  	LOOP and white pages.   
To:   common-lisp at SU-AI  

(reply to "BROOKS@MIT-OZ"@MIT-MC)

Just so Scott doesn't get the feeling that he is the only one who doesn't want
LOOP in the white pages, here are some reasons that I am against it.

I haven't looked at the complete new proposal, but in MOON's MACRO example:

(DEFUN MACROEXPAND (FORM &AUX CHANGED)
  "Keep expanding the form until it is not a macro-invocation"
  (LOOP (MULTIPLE-VALUE (FORM CHANGED) (MACROEXPAND-1 FORM))
	(IF (NOT CHANGED) (RETURN FORM))))

one sees a fundamentally new scoping system in action. In all other binding
mechanisms (e.g. LET, DO, MULTIPLE-VALUE-BIND, DEFUN and even PROG with init values)
the CAR of a form identifies the form type as one which will do binding which
will remain in effect only within that form, and at the same time says where
in the form one can find the variables and what they will be bound to.
In this example MULTIPLE-VALUE says where the variable names and the things
that they will be bound to are (syntactically), but the scope of those
bindings is determined by a context outside of the form with MULTIPLE-VALUE
as its CAR. On the other hand while the symbol LOOP determines the scope of
the bindings (in the example at least) it doesn't determine the syntactic location
of the variables and what they get bound to.

While JLK is right that it is possible to write horrible code with DO, I don't
agree that we have to jump in *now* with another mechanism, especially when that
mechanism introduces radically new (and I believe bad) scoping rules.  Let's leave
LOOP in the yellow pages for now.

∂30-Aug-82  0905	Masinter at PARC-MAXC 	case-sensitivity: a modest proposal  
Date: 30-Aug-82  9:05:16 PDT (Monday)
From: Masinter at PARC-MAXC
Subject: case-sensitivity: a modest proposal
To: common-lisp at SU-AI

I propose the following solution to the current case dilemma:

a) Common Lisp is case sensitive: Foo is not eq to foo.

b) All symbols in packages admitted into the Common Lisp white- and yellow-
  pages are REQUIRED to be lower case. 

This has the following advantages:
a) everybody types and sees lower case
   (I guess this is an advantage, since most type-faces look better
    with all lower-case rather than all caps)
b) no confusion about what you have to remember if you are using
   shared packages: all of the symbols are lower case.

c) symbols used for documentation, parsing, file names in host operating
   system, etc. can be mixed case/upper case, etc.

d) users who like MixedCaseSymbolsToSeparateWordsInSymbols can do so
   and still have their symbols print out the same way they were read in.

∂30-Aug-82  0910	Masinter at PARC-MAXC 	Re: Circular structure printing 
Date: 30-Aug-82  9:11:24 PDT (Monday)
From: Masinter at PARC-MAXC
Subject: Re: Circular structure printing
In-reply-to: dlw at SCRC-TENEX's message of Sunday, 29 August 1982, 11:33-EDT
To: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
cc: common-lisp at su-ai

The HPRINT package in Interlisp prints circular structures in the desired
fashion.  The algorithm assumes that output is going to a random-access
device (e.g., disk file); if the output file is not RANDACCESSP, then output
is first sent to a scratch file and when printing is complete, it is sent to
the final destination.  HPRINT keeps a hash table of objects-printed -> byte
position so that on second reference it can go back and put in the forward
reference before the first reference.

I believe you already have the source, but in any case, it can be found in [parc-maxc]<lisp>hprint.

Larry

∂30-Aug-82  0913	Dave Dyer       <DDYER at USC-ISIB> 	note on portability    
Date: 30 Aug 1982 0904-PDT
From: Dave Dyer       <DDYER at USC-ISIB>
Subject: note on portability
To: common-lisp at SU-AI


 More than simple assertions of portability is necessary to make common
lisp common.  To paraphrase Larry Masinter, I don't believe in portability
in the absence of instances of porting.  As this applies to common lisp,
I don't believe common lisp code will really be portable unless substantial
amounts of code are actually shared among the implementations.  Such
shared code will allow minor quirks in the primitives to be discovered and
removed, and also assure that hairy higher level features are not subtly
different from one implementation to the next.

 The LOOP implementation is an ideal candidate to be developed
and ported.  Symbolics will surely build and document it - and the
rest of the implementors can ignore it until the code drops from the sky.
-------

∂30-Aug-82  0957	Dave Dyer       <DDYER at USC-ISIB> 	Circular structure printing 
Date: 30 Aug 1982 0937-PDT
From: Dave Dyer       <DDYER at USC-ISIB>
Subject: Circular structure printing
To: common-lisp at SU-AI


 I believe the Interlisp "HORRIBLEPRINT" package works as you
describe.  Interlisp's solution to the "problem" of having to do
two passes is that it only uses horribleprint where the user
has declared it necessary.  The code is common to all Interlisp
implementations, and is public domain as far as I know.
-------

∂30-Aug-82  1032	Scott E. Fahlman <Fahlman at Cmu-20c> 	No PRINT-time case conversion switch please!  
Date: Monday, 30 August 1982  13:17-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Kent M. Pitman <KMP at MIT-MC>
Cc:   COMMON-LISP at SU-AI
Subject: No PRINT-time case conversion switch please!


KMP misunderstands the proposal for a switch to convert output to lower
case.  In the context of a Lisp that converts to upper case by default,
this switch would cause PRINT to do the following:

Figure out how the symbol would print normally, without the switch being on.

If the symbol would print with |'s, leave it alone.

Else, every character that is about to come out in upper case is
converted to lower-case and printed.  Lower-case chars, which would
normally be printed with slashes, are left untouched and the slash is
left in.

This is guaranteed to read in correctly with the upper-case-converting
reader.  If the user reads in such a file with conversion disabled, he
loses.

USER INPUT	OUTPUT (normal)		OUTPUT (with flag true)

FOO		FOO			foo
foo		FOO			foo
Foo		FOO			foo

|foo|		|foo|			|foo|
|X|		X			x
|x|		\x			\x

It is clear that, with or without the proposed switch, any printer has
to make some assumptions about what the reader is going to do.

-- Scott
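
A minimal sketch of that rule, applied to the string PRINT would
normally produce for the symbol; the function name is illustrative and
the escape handling is simplified to just | and \:

(defun downcase-normal-output (normal-output)
  ;; NORMAL-OUTPUT is what PRINT would emit with the switch off.
  (if (find #\| normal-output)
      normal-output                             ; |...| form: leave it alone
      (let ((result (make-string (length normal-output)))
            (slashed nil))
        (dotimes (i (length normal-output) result)
          (let ((ch (char normal-output i)))
            (setf (char result i)
                  (cond (slashed (setq slashed nil) ch)     ; keep \x untouched
                        ((char= ch #\\) (setq slashed t) ch)
                        (t (char-downcase ch)))))))))

This reproduces the table above: FOO and Foo come out as foo, |foo|
stays |foo|, and \x stays \x.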

∂30-Aug-82  1234	Kent M. Pitman <KMP at MIT-MC> 	Access to documentation strings  
Date: 30 August 1982 13:43-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Access to documentation strings
To: Fahlman at CMU-20C
cc: Common-Lisp at SU-AI

The use of one-line documentation strings is not to provide complete 
documentation; it is to allow APROPOS-style primitives so that the user
can make a good guess about which functions to ask for full documentation on.

    Date: Monday, 30 August 1982  13:24-EDT
    From: Scott E. Fahlman <Fahlman at Cmu-20c>

    We already have a long-form documentation (the manual entry) and a short
    pocket-guide sort of entry (the documentation string) for each function...

I question the importance of this point. I think it's neat that you have so
much documentation of primitive functions, but you should definitely not think
of the primitive system as the only thing that's going to use this sort of 
documentation facility. You can expect other major systems (ZWEI, for example)
and embedded languages (FRL, BrandX, etc.) to use it, all of which will increase
the size of the namespace considerably. When a user is searching for a primitive
he knows is there somewhere, the short-description facility is tremendously
useful. 

∂30-Aug-82  1327	Scott E. Fahlman <Fahlman at Cmu-20c> 	Access to documentation strings
Date: Monday, 30 August 1982  13:24-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Kent M. Pitman <KMP at MIT-MC>
Cc:   Common-Lisp at SU-AI
Subject: Access to documentation strings


We already have a long-form documentation (the manual entry) and a short
pocket-guide sort of entry (the documentation string) for each function.
In the long run, the manual will be available on-line, I am sure.  We
also have (in Spice Lisp, anyway) an on-line way of accessing the
argument-list of a function.  What I question is whether there is any
real use for a super-short one-line documentation string.  Having
attempted to write several of these, I find that it is almost impossible
to say something meaningful and not misleading about most functions in one
line or one short sentence.  That is why I would like to flush this idea
and just go with short and long.  I'm not sure that the Emacs experience
is relevant here.

If we do go with both short and super-short descriptions, I agree that
they should be separate strings hidden in separate places.

-- Scott

∂30-Aug-82  1428	Alan Bawden <ALAN at MIT-MC> 	misinformation about LOOP
Date: 30 August 1982 17:19-EDT
From: Alan Bawden <ALAN at MIT-MC>
Subject:  misinformation about LOOP
To: BROOKS at MIT-MC
cc: common-lisp at SU-AI

    Date: 30 Aug 1982 0914-PDT
    From: Rod Brooks <ROD at SU-AI>

    I haven't looked at the complete new proposal, but in MOON's MACRO example:

    (DEFUN MACROEXPAND (FORM &AUX CHANGED)
      "Keep expanding the form until it is not a macro-invocation"
      (LOOP (MULTIPLE-VALUE (FORM CHANGED) (MACROEXPAND-1 FORM))
	    (IF (NOT CHANGED) (RETURN FORM))))

    one sees a fundamentally new scoping system in action. In all other binding
    mechanisms (e.g. LET, DO, MULTIPLE-VALUE-BIND, DEFUN and even PROG with
    init values) the CAR of a form identifies the form type as one which will
    do binding which will remain in effect only within that form, and at the
    same time says where in the form one can find the variables and what they
    will be bound to. In this example MULTIPLE-VALUE says where the variable
    names and the things that they will be bound to are (syntactically), but
    the scope of those bindings is determined by a context outside of the form
    with MULTIPLE-VALUE as its CAR. On the other hand while the symbol LOOP
    determines the scope of the bindings (in the example at least) it doesn't
    determine the syntactic location of the variables and what they get bound
    to.

This is confused.  Moon could just as easily have written:

(DEFUN MACROEXPAND (FORM &AUX CHANGED)
  "Keep expanding the form until it is not a macro-invocation"
  (DO ()
      (NIL)
    (MULTIPLE-VALUE (FORM CHANGED) (MACROEXPAND-1 FORM))
    (IF (NOT CHANGED) (RETURN FORM))))

There is no problem I can see with determining the scope of any bindings made
by LOOP.  In Moon's code LOOP made NO bindings.

∂30-Aug-82  1642	JonL at PARC-MAXC 	Re: byte specifiers  
Date: 30 Aug 1982 16:42 PDT
From: JonL at PARC-MAXC
Subject: Re: byte specifiers
In-reply-to: Guy.Steele's message of 23 August 1982 2328-EDT (Monday)
To: Guy.Steele at CMU-10A
cc: Earl A. Killian <EAK at MIT-MC>, common-lisp at SU-AI

True, an integral "byte specifier" is useful, but last fall CommonLisp had
in it the very useful functions LOAD-BYTE and DEPOSIT-BYTE, which
took separate arguments for the "pp" and "ss" parts.

These two functions seem to have disappeared from the current CommonLisp
manual, without any discussion.  What happened?  I tried to bring this
matter up at Pgh last week, but it seems to have gotten lost in the wash.

Anyway, the value of separating them (if EAK's arguments weren't enough)
is that frequently the "ss" part is constant, but the "pp" part varies.  Just
for the record, the VAX has instructions which are like LOAD-BYTE, rather
than the PDP10ish LDB.


∂30-Aug-82  1654	JonL at PARC-MAXC 	Re: a protest   
Date: 30 Aug 1982 16:54 PDT
From: JonL at PARC-MAXC
Subject: Re: a protest
In-reply-to: HEDRICK's message of 24 Aug 1982 1321-EDT
To: HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility)
cc: common-lisp at SU-AI

I thought CLOSUREs got discussed, but I'm not sure under which
numbered item.  In particular, I thought we agreed upon having
CLOSUREs "capture" local variables (as well as special variables), 
and maybe we renamed this "locality" concept as "lexical".  But
I don't remember any decision about allowing non-local GOs --
what's the story?  (issue 68 isn't about non-local GO, since it's
concerned with the lexical scope around a CATCH-ALL).


∂31-Aug-82  0756	Scott E. Fahlman <Fahlman at Cmu-20c> 	Masinter's proposal on case    
Date: Tuesday, 31 August 1982  10:56-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: Masinter's proposal on case


If we were to make Common Lisp case-sensitive, Masinter's proposal to
require every symbol in the white and yellow pages to be lower-case
would be essential to retaining everyone's sanity, but I oppose the
basic suggestion.  We might be able to defend the case-purity of the
white and yellow pages, but under this proposal there would still be a
lot of code written in mixed case, and users would still have the problem
of remembering the case of everything.  I see two cultures developing
very quickly, one of which types in lower-case only and the other
capitalizing assorted words, as in ThisIsaReallyUglySymbol.  It still
looks like a recipe for chaos to me.

-- Scott

∂31-Aug-82  0812	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	2nd generation LOOP macro 
Date: Tuesday, 31 August 1982, 11:06-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: 2nd generation LOOP macro
To: Fahlman at Cmu-20c, BUG-LOOP at MIT-ML, Common-Lisp at SU-AI
In-reply-to: The message of 27 Aug 82 21:11-EDT from Scott E. Fahlman <Fahlman at Cmu-20c>

    Date: Friday, 27 August 1982  21:11-EDT
    From: Scott E. Fahlman <Fahlman at Cmu-20c>

    Well, it is OK to write a portable package that explicitly requires
    something from the yellow pages, so we could still use your portable
    code and include the LOOP package with it.

The main problem with this is that it introduces a new phenomenon.  "I
thought that I'd use Moon's nifty new code-walker to improve my hairy
language extension, but unfortunately it uses the
Moon/Burke/Bawden/whomever LOOP package and so when I load it into my
environment it smashes my own LOOP package, so I can't use it." This is
a fundamental problem with the whole idea of the yellow pages; I don't
think there is any solution, so we should just live with it in general.

I think that it is important that LOOP go into the white pages, so that
our language has some reasonable way to get the same power in iteration
that other languages have, but obviously we need a real solid proposal
before this can be discussed.  If it is necessary to leave LOOP in the
yellow pages for a while before it is adopted, that is unfortunate but
acceptable.  

However, I'd like it to be made clear that LOOP is being seriously
proposed for eventual inclusion in the white pages even if it is only
going into the yellow pages for now, so that people will be encouraged
to try it, suggest improvements, and use it instead of ignoring it and
writing their own equivalent but gratuitously incompatible LOOP macros.
(Conceptually-different and non-gratuitously incompatible
keyword-oriented iterators are perfectly OK, of course.)

∂31-Aug-82  0816	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>   
Date: Tuesday, 31 August 1982, 11:11-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
To: Kim.fateman at UCB-C70, common-lisp at su-ai
In-reply-to: The message of 29 Aug 82 21:17-EDT from Kim.fateman at Berkeley

    Date: 29 Aug 1982 18:17:22-PDT
    From: Kim.fateman at Berkeley

    It might be appropriate to mention the way that Franz now accommodates
    various loop packages (simultaneously):
    ...This has enabled us to provide "portability" in a useful fashion.

    I hope that common lisp supports portability at least as well.

Let us not play semantic games.  "Portability" does not refer to the
ability to simultaneously support incompatible packages in the same Lisp
environment.  "Portability" means that a program can be moved from one
C.L. implementation on one machine to another on another machine and
still behave the same way.  Portability is one of the main goals of
Common Lisp; ability to support many LOOP packages with conflicting
names is no more a goal than is the ability to support user-defined
functions named CAR.  There's nothing about portability that says that
we may not define LOOP to be the name of a special form.

∂31-Aug-82  0823	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	function specs  
Date: Tuesday, 31 August 1982, 11:21-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: function specs
To: Fahlman at Cmu-20c
Cc: Common-Lisp at SU-AI
In-reply-to: The message of 29 Aug 82 18:02-EDT from Scott E. Fahlman <Fahlman at Cmu-20c>

    Maybe all we disagree about is whether to call the location expression a
    name and whether to make up a whole new syntax for it, rather than using
    the SETF syntax.

That is exactly right.  A "function spec" is a Lisp object that
designates a cell that might hold a function object.  You suggest using
a retriever-form, instead of using the new syntax that we use.  Our new
syntax is no worse than the new syntax for Lisp data types provided by
Common Lisp; it's just another one of those kinds of thing.

The implication of your proposal is that we'd have to add a new function
to do the retrieval for any kind of function spec we put in.  For
example, there would have to be a function that took a function and a
number, and returned the Nth internal function of that function, so that
you could do (:INTERNAL FOO 2) (it could take a name, too, but that's
orthogonal).  I guess our feeling was that it was better to avoid
introducing a new function for each kind of function spec, and it was
better to just create a new object capable of representing general
function locations and have a set of functions (FDEFINITION and friends)
that interpret them.  But your suggestion has merit too.  I don't feel
strongly one way or the other right now; I'd like to hear other input
from people.

∂31-Aug-82  0841	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	case-sensitivity and portability    
Date: Tuesday, 31 August 1982, 11:37-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: case-sensitivity and portability
To: Fahlman at Cmu-20c, common-lisp at SU-AI
In-reply-to: The message of 29 Aug 82 13:07-EDT from Scott E. Fahlman <Fahlman at Cmu-20c>

I agree completely with your reasoning about why JKF's proposal does not
make it.  I feel the same way you do and have nothing to add.

Masinter's proposal is the only one I have ever heard that actually
meets all of the standard objections.  However, it is a strange
compromise; it tells the case-sensitive folks that it is OK for them to
use mixed-case with sensitivity, but that if they do so, their package
will never be accepted into the yellow pages (nor the white pages, of
course).  It might be an acceptable compromise.  The main problem from
the point of view of the case-insensitive people (us) is that we'd have
to convert every last drop of code to lower case.  Now, I have been
doing that in my own programming for the last few years because I like
the way it looks, but some of my friends who like the way upper case
looks might not be so happy; note that they are essentially required to
use only lower-case even in their own private code!

This proposal deserves serious consideration; I'd like to see more ideas
about it.  (Ideas, not flames, please...)

∂31-Aug-82  0906	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	LOOP and white pages.     
Date: Tuesday, 31 August 1982, 10:59-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: LOOP and white pages.   
To: brooks at MIT-OZ at MIT-MC, common-lisp at SU-AI
In-reply-to: The message of 30 Aug 82 12:14-EDT from Rod Brooks <ROD at SU-AI>

    Date: 30 Aug 1982 0914-PDT
    From: Rod Brooks <ROD at SU-AI>
    ...here are some reasons that I am against it.

    In this example MULTIPLE-VALUE says where the variable names and the things
    that they will be bound to are (syntactically), but the scope of those
    bindings is determined by a context outside of the form with MULTIPLE-VALUE
    as its CAR....

As Alan pointed out, this is completely confused.  It has nothing to do
with LOOP.  The LOOP doesn't bind any variables or define any scopes in
this example; it could be replaced by a DO () (()) and the same things
would happen.

    While JLK is right that it is possible to write horrible code with DO, I don't
    agree that we have to jump in *now* with another mechanism, especially when that
    mechanism introduces radically new (and I believe bad) scoping rules. Lets leave
    LOOP in the yellow pages for now.

JLK's point is not that it is possible to write horrible code with DO.
JLK's point is that it is NOT possible to write CLEAN code with DO in
many simple and common cases.

LOOP does not introduce any radical new scoping rules.  Please be more
careful before spreading stories like this around; the situation is
confused enough as it is.  Variables bound in a LOOP body are bound for
the duration of the body just as in DO and PROG and LET.

∂31-Aug-82  1441	MOON at SCRC-TENEX 	Re: a protest  
Date: Tuesday, 31 August 1982  17:38-EDT
From: MOON at SCRC-TENEX
To: JonL at PARC-MAXC
Cc: common-lisp at SU-AI
Subject: Re: a protest

    Date: 30 Aug 1982 16:54 PDT
    From: JonL at PARC-MAXC
    Subject: Re: a protest
    In-reply-to: HEDRICK's message of 24 Aug 1982 1321-EDT
    To: HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility)
    cc: common-lisp at SU-AI

    I thought CLOSUREs got discussed, but I'm not sure under which
    numbered item.  In particular, I thought we agreed upon having
    CLOSUREs "capture" local variables (as well as special variables), 
    and maybe we renamed this "locality" concept as "lexical".  

I think the word closure is being used for two things.  There is the
Lisp machine function CLOSURE, which closes over a named set of
special variables.  We seem to have agreed (through the mail) not to
put this into Common Lisp.

There is also the pseudo-mathematical concept of the closure of a function
over an environment.  I'm not sure but I think we have agreed to support
"full funarging" as part of the introduction of lexical scoping in
Common Lisp.  Thus (FUNCTION (LAMBDA ...)), or (FUNCTION FOO) where
FOO is defined with a LABELS, used as an argument passes a closure
(lower case) of that function as the argument.  The manual is silent
about the issue that this funarg is a different data type from the
function itself.


    But
    I don't remember any decision about allowing non-local GOs --
    what's the story?  (issue 68 isn't about non-local GO, since it's
    concerned with the lexical scope around a CATCH-ALL).

This was agenda item #8.  We agreed that at least in principle there should
be no restrictions.  Item #49 (get rid of local scope, have only lexical
scope) is relevant, also.

Agenda item #68 is about PUSHNEW.

∂31-Aug-82  1517	MOON at SCRC-TENEX 	Agenda item 61 
Date: Tuesday, 31 August 1982  18:13-EDT
From: MOON at SCRC-TENEX
To: Common-Lisp at su-ai
Subject: Agenda item 61

Are we all agreed that the simple iterator proposed under the name CYCLE
will be renamed LOOP, even though the full LOOP will not be in the 1982
version of the "white pages"?

I suggest that the white pages contain a note that the meaning of a LOOP
expression with atoms in its body is explicitly undefined, and becomes
defined if you use the LOOP package in the yellow pages.  Note that only
lists are useful in the body of the proposed CYCLE special form.

I just don't want to have two names for the same thing.
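
For concreteness, the simple iterator amounts to something like the
following macro; the name SIMPLE-LOOP and the particular expansion are
illustrative only, not the agreed definition:

(defmacro simple-loop (&body body)
  `(block nil
     (tagbody
      again
        ,@body
        (go again))))

;; Moon's MACROEXPAND example, with SIMPLE-LOOP in place of LOOP:
;; (simple-loop (multiple-value (form changed) (macroexpand-1 form))
;;              (if (not changed) (return form)))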

∂31-Aug-82  1538	MOON at SCRC-TENEX 	LOAD-BYTE and DEPOSIT-BYTE    
Date: Tuesday, 31 August 1982  18:05-EDT
From: MOON at SCRC-TENEX
To: JonL at PARC-MAXC
Cc: common-lisp at SU-AI, Earl A. Killian <EAK at MIT-MC>, 
      Guy.Steele at CMU-10A
Subject: LOAD-BYTE and DEPOSIT-BYTE

I won't throw any bombs if we decide to put these in.  I vaguely remember
a meeting where we discussed compiler optimization such that
	(LDB (BYTE 1 I) W)
would be equivalent to
	(LOAD-BYTE 1 I W)
or whatever the arguments are.  This would make LOAD-BYTE fairly
superfluous.
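
Written out, the equivalence Moon mentions is roughly the following;
the argument order simply follows his example, since, as he says, the
exact arguments are not settled:

	(DEFUN LOAD-BYTE (SIZE POSITION WORD)
	  (LDB (BYTE SIZE POSITION) WORD))

With a compiler optimization that recognizes a constant (BYTE ...) form,
the two spellings can compile to the same code, which is what would make
LOAD-BYTE fairly superfluous.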

∂31-Aug-82  1850	Masinter at PARC-MAXC 	Re: case-sensitivity and portability 
Date: 31-Aug-82 18:51:03 PDT (Tuesday)
From: Masinter at PARC-MAXC
Subject: Re: case-sensitivity and portability
In-reply-to: dlw at SCRC-TENEX's message of Tuesday, 31 August 1982, 11:37-EDT
To: common-lisp at SU-AI

I have on more than one occasion taken someone else's Interlisp program and (without very much pain) converted all of the MixedCaseIdentifiers to 
ALLUPPERCASE before including it in the Interlisp system (in which, although
mixed case is allowed, all standard functions are uppercase to avoid confusion.)

This has been acceptable. That is: "it tells the case-sensitive folks that 
it is OK for them to use mixed-case with sensitivity, but that if they do so, 
their package will have to be converted before it will be accepted into
CommonLisp."

Since it is often true that packages will have to be DEBUGGED before being
adopted, the changing of identifier names will pale in comparison....

Larry

∂31-Aug-82  1952	Scott E. Fahlman <Fahlman at Cmu-20c> 	LOAD-BYTE and DEPOSIT-BYTE
Date: Tuesday, 31 August 1982  22:52-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: LOAD-BYTE and DEPOSIT-BYTE


I agree with Moon on the LOAD-BYTE business.  I think it's silly to have
two kinds of byte-hacking functions around when the conversion from
one form to the other is usually so trivial.

-- Scott

∂31-Aug-82  2342	Earl A. Killian <EAK at MIT-MC> 	lambda 
Date: 1 September 1982 02:38-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject: lambda
To: common-lisp at SU-AI

The 29 July manual claims

1. (LAMBDA ...) is not evaluable.

and

2. '(LAMBDA ...) is a valid function.

On 1, I see no reason for (LAMBDA ...) not to evaluate to a function
(i.e. as if it had (FUNCTION ...) wrapped around it).  So why not
allow it?  Scheme pioneered this, and I think it was quite
aesthetic.

On 2, isn't allowing this inviting users to screw themselves now
that we've got lexical scoping?  E.g.
	(LET ((A ...)) (SORT L '(LAMBDA (X Y) (< X (- Y A)))))
won't get the locally bound A.

∂01-Sep-82  0046	Kent M. Pitman <KMP at MIT-MC> 	'(LAMBDA ...)
Date: 1 September 1982 03:42-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject: '(LAMBDA ...)
To: COMMON-LISP at SU-AI

I might also note that there are users who think it's acceptable to type
'(LAMBDA ...) because it's supported as a compatibility thing.

In the past few months I spent a lot of time looking over a certain 
large system which had been written for Maclisp and transported to the LispM 
and which ran far slower than was considered reasonable. In poking around,
I noticed that 20% of the time was being lost to calls to EVAL in what I
thought was to be a compiled system. I later found that this was because 
someone was putting '(LAMBDA ...) instead of #'(LAMBDA ...) and was therefore
not getting compiled code in some places that desperately needed it. If the
evaluator had just been firm and disallowed quoted lambdas as funcallable
things, it'd have caused a minor amount of grief getting the system up, 
but everyone would have been much happier in the long run.

The convenience, it seems to me, should be disallowed to wake people up
to the fact that a quoted lambda is going to win only in just enough
cases to give the illusion of safety and in general is just going to
cause any number of unexplained slowdowns or mysterious binding problems
as suggested by EAK.

Asking implementors to disallow (LAMBDA ...) as a function is a bit extreme
for common-lisp, but making it undefined in the common-lisp spec and 
encouraging implementors to disallow it seems a reasonable approach.

I also agree with EAK that
 (defmacro lambda (bvl &body body) `#'(lambda ,bvl ,@body))
is a winning abbreviation. 
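
With that abbreviation (or with an explicit #'), EAK's SORT example
closes over the lexically bound A instead of relying on a quoted list:

	(LET ((A ...)) (SORT L #'(LAMBDA (X Y) (< X (- Y A)))))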

∂01-Sep-82  0252	DLW at MIT-MC 	lambda    
Date: Wednesday, 1 September 1982  05:51-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   Earl A. Killian <EAK at MIT-MC>
Cc:   common-lisp at SU-AI
Subject: lambda

I presume the reason that (lambda () ...) does not evaluate is
because Common Lisp, unlike Scheme, has the dual notions of
"functional meaning" and "value meaning".  The former is used
for the first element of a non-special form list, and the
latter is used for the rest.  FUNCTION is provided to allow
"functional" meaning in "value" context.  This is all so that
we can have a LIST function and still let people have variables
named LIST.
-------

∂01-Sep-82  1259	Earl A. Killian            <Killian at MIT-MULTICS> 	lambda 
Date:     1 September 1982 1257-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  lambda
To:       DLW at MIT-AI
Cc:       Common-Lisp at SAIL

Allowing (LAMBDA ...) to evaluate does not prevent you from having a
LIST function and a LIST variable.  What are you objecting to?  I never
suggested getting rid of FUNCTION, which is still necessary for symbols,
or the dual value cell concept.

Btw, I should have pointed out in my original message that the two
issues are completely independent.  I.e. you can have A, B, or both.

Getting rid of '(LAMBDA ...) is removing an ugly blemish from the
language, but since no one is forcing me to write it, it doesn't really
affect me, though it might affect an implementor.

But I'd like to use (LAMBDA ...) in my own code, and I'd rather not have
to change it for it to go into the yellow pages...

∂02-Sep-82  0827	jkf at mit-vax at mit-xx 	Masinter's modest proposal   
Date: 2 Sep 1982 11:21:43-EDT
From: jkf at mit-vax at mit-xx
To: common-lisp@su-ai
Subject: Masinter's modest proposal

  Due to hardware problems at Berkeley, we are days behind in receiving
arpanet mail.  Therefore some of these remarks may be dated:

  I fully support Masinter's proposal and I am a little surprised that others 
haven't criticized it.  As I see it, his proposal differs from mine in two
ways:
1) He wants the language to be case-sensitive always whereas I wanted a switch
   to allow it to be insensitive too.  The reason I suggested a switch
   was that I thought there were people out there who like to vary the
   capitalization of symbols: eg (DO ((x (CAR y) (CDR x))) () (PRINT x)).
   If people don't really care to have a case-insensitive mode, that is 
   fine with me.

2) He proposes that all public symbols be lower case.  This is really a style
   question and I favor such a proposal.  I think that it is important to
   be consistent in a language like lisp where there are so many symbols
   to remember.


					- John Foderaro

∂02-Sep-82  0919	Richard E. Zippel <RZ at MIT-MC> 	case-sensitivity: a modest proposal 
Date: Thursday, 2 September 1982, 12:07-EDT
From: Richard E. Zippel <RZ at MIT-MC>
Subject: case-sensitivity: a modest proposal
To: common-lisp at SU-AI

Except for the moderate pain in converting a bunch of code to lower
case, I think Masinter's proposal is pretty good.   

∂02-Sep-82  1033	JonL at PARC-MAXC 	Re: SETF and friends [and the "right" name problem]
Date: 2 Sep 1982 10:33 PDT
From: JonL at PARC-MAXC
Subject: Re: SETF and friends [and the "right" name problem]
In-reply-to: RWK's message of 25 August 1982 04:41-EDT
To: Robert W. Kerns <RWK at MIT-MC>
cc: common-lisp at SU-AI

Apologies for replying so late to this one -- have been travelling for a week
after AAAI, and *moving to a new house* -- but I want to add support to
your comments.

Two issues seem to be paramount here:

1) I too would not like to see this change, specifically because it would
   incompatibly destroy the name for the time-honored SET function, and
   this surely falls into the category of "gratuitous" incompatibilities which
   CommonLisp promised not to do [I don't particularly like the notion of
   "fixing up" oddball names, such as HAULONG, but at least in that one
   case the number of users who've ever used HAULONG is probably a drop
   in the bucket compared to those who've ever used SET].

2) It must be an inevitable consequence of standardization in a large community
   that undue proportions of time are spent arguing over the "right" name for
   some functionality -- according to reports, this happened in the PASCAL
   world, so at least in one dimension Lisp is beginning to look like Pascal.

   "Right" apparently means "English-based and functionally descriptive", and
   so often one man's mnemonic is another man's anathema.   I think it must be
   conceded that for frequently used primitive operators, a short name, even if
   nonsensical, is to be preferred to a "right" one.  E.g.,  CONS is better than
   ALLOCATE-NEW-LIST-CELL.  

   Couldn't we resist the urge to rationalize every name?


∂02-Sep-82  1146	JonL at PARC-MAXC 	Re: a miscellany of your comments   
Date: 2 Sep 1982 11:46 PDT
From: JonL at PARC-MAXC
Subject: Re: a miscellany of your comments
In-reply-to: Killian's message of 25 August 1982 1452-pdt
To: Earl A. Killian <Killian at MIT-MULTICS>
cc: Fahlman at CMUc, Common-Lisp at SU-AI

Re Fixing of names now:
I certainly hope that Fahlman&Co can proceed without another meeting.
Even after initial stabilization, there will  likely be need for regular meetings
(say, after each Lisp Conference?)

Re case sensitivity:
As long as we agree in principle that the READer be user-tailorable for
case sensitivity, then it may not be necessary to specify just how such change
is effected;  I'd prefer the MacLisp way wherein each character can be
individually "translated", but that generality is seldom used.   On the other
hand, it would seem preferable to have the switch for case-sensitivity in
the ReadTable rather than somewhere else.  InterLisp puts it in the "terminal
table", so that conversion takes place as it were within the input stream
for the connected terminal;  the two lossages with this treatment are that
  1) translation takes place for non-READ operations too [such as INCH], 
  2) no translation takes place for READing from files.

Re InterLisp status on case sensitivity:
You may be under a misconception in the comment about InterLisp:
    "Doing this would let you write code as in
         (SETQ MultiWordVarName NIL)
     as many Interlisp users do all the time (though possible in Maclisp, 
     it never caught on). "
InterLisp does not standardize to uppercase as MacLisp does, so the name
MultiWordVarName remains mixed-case.  As it happens, some of the
CMU MacLisp users have been using mixed-case names in their files
for some time (but of course they are upper-cased upon read-in).




∂02-Sep-82  1230	JonL at PARC-MAXC 	Re: CHECK-ARG-TYPE [and CHECK-SUBSEQUENCE]    
Date: 2 Sep 1982 12:30 PDT
From: JonL at PARC-MAXC
Subject: Re: CHECK-ARG-TYPE [and CHECK-SUBSEQUENCE]
In-reply-to: Moon at SCRC-TENEX's message of Thursday, 26 August 1982,
 03:04-EDT
To: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
cc: Common-Lisp at SU-AI

PDP10 MacLisp and VAX/NIL have had the name CHECK-TYPE for several 
years for essentially this functionality (unless someone has recently renamed
it).   Since it is used to certify the type of any variable's value,  it did not
include the "-ARG" part.  The motivation was to have a "checker" which was
more succinct than CHECK-ARGS, but which would generally open-code the
type test (and hence introduce no delay to the non-error case).  

I rather prefer the semantics you suggested, namely that the second argument 
to CHECK-TYPE be a type name (given the CommonLisp treatment of type
hierarchy).  At some level, I'd think a "promise" of fast type checking should
be guaranteed (in compiled code) so that persons will prefer to use this
standardized facility;  without some indication of performance, one would
be tempted to write his own in order not to slow down the common case.


If the general sequence functions continue to thrive in CommonLisp, I'd
like to suggest that the corresponding CHECK-SUBSEQUENCE macro (or
whatever renaming of it should occur) be included in CommonLisp.  

  CHECK-SUBSEQUENCE (<var> <start-index> <count>) &optional <typename>  

provides a way to certify that <var> holds a sequence datum of the type
<typename>, or of any suitable sequence type (e.g., LIST, or STRING or 
VECTOR etc) if <typename> is null; and that the indicated subsequence
in it is within the size limits.
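
A minimal sketch of such a macro, assuming the SEQUENCE type covers the
"any suitable sequence type" case; the error messages are illustrative
only (and, for brevity, the start and count forms get evaluated more
than once):

(defmacro check-subsequence ((var start count) &optional typename)
  (let ((type (or typename 'sequence)))
    `(progn
       (unless (typep ,var ',type)
         (error "The value of ~S, ~S, is not of type ~S" ',var ,var ',type))
       (unless (<= 0 ,start (+ ,start ,count) (length ,var))
         (error "The range [~S, ~S) does not fit within ~S"
                ,start (+ ,start ,count) ,var))
       t)))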
 

∂02-Sep-82  1246	JonL at PARC-MAXC 	Re: Access to documentation strings 
Date: 2 Sep 1982 12:46 PDT
From: JonL at PARC-MAXC
Subject: Re: Access to documentation strings
In-reply-to: Moon at SCRC-TENEX's message of Thursday, 26 August 1982,
 03:25-EDT
To: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
cc: Common-Lisp at SU-AI

I think you've got your hand on something bigger than merely documentation
strings; the question of "object types", or "definition type" if you will, 
parallels the InterLisp "file package" facility.  Despite the name "file pkg", 
this facililty is really about a coordinated database for user code ("user"
can include the system, since most of it is written in Lisp now).  I wouldn't
want to see progress on documentation held up at all, but it might be wise
to consider DOCUMENTATION in the light of such a coordinated facility.

File Pkg allows user-extensions to the "types" in a uniform way;  it interfaces
not only to file manipulation commands, but also to MasterScope commands.
Only recently (sadly) did InterLisp correctly separate out the multiple
definitions, e.g. some name being both a Global variable (a VARS type) and
also a function name (a FNS type), but this is a must. 

On the other hand, I'm not sure I see any value to the generalization of names
from symbols to any lisp object -- can you provide any motivation for this?


∂02-Sep-82  1325	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	string-out 
Date: Thursday, 2 September 1982, 14:28-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: string-out
To: common-lisp at su-ai

The STRING-OUT and LINE-OUT functions of the Swiss Cheese manual are
mysteriously missing from the Colander manual.  Is this intentional?  I
hope not.  These (particularly the former) are useful functions.

∂02-Sep-82  1331	JonL at PARC-MAXC 	Re: 2nd generation LOOP macro  
Date: 2 Sep 1982 13:32 PDT
From: JonL at PARC-MAXC
Subject: Re: 2nd generation LOOP macro
In-reply-to: Fahlman's message of Thursday, 26 August 1982  20:43-EDT
To: Scott E. Fahlman <Fahlman at Cmu-20c>
cc: BUG-LOOP at MIT-ML, Common-Lisp at SU-AI

Let me proffer a reason why LOOP should even be in the *white* pages.
Despite the best efforts of some functional programming people (especially
PROLOG?) iterative constructs won't go away; e.g. too often the translation of
a somewhat simple loop into functional equations leads to indecipherable
code.  Thus it would be well to standardize on some loop facility, if in
fact a standard can be found.

InterLisp's iterative statement facility *does not* share all the crocks of
CLISP (and was no doubt put there rather than being a macro due to
the primitive treatment of macros).  Furthermore, it's years of existence
as a standard within InterLisp, and its easy extensibility, speak well for it.
Both the past and current LOOP proposals are very very much like the
InterLisp iterative constructs, and should be viewed as having years of
support behind their basic ideas.

Another winning idea from the InterLisp iterative statement is that
the prettyprinter treats them specially, and tries to do a "good" job of
formatting so that the constituent parts stand out 2-dimensionally.  A real
defect of DO loops is that almost all minor exceptions to the simple case
have to appear in the code in some place that obscures their nature
(e.g., in the code body, or actually *before* the DO loop, or in the return
clause);  I realize that doing a "good" job will cause months of seemingly
endless discussion, but fruitage of this idea has got to be worth the effort.



∂02-Sep-82  1343	FEINBERG at CMU-20C 	Loop vs Do    
Date: 2 September 1982  16:43-EDT (Thursday)
From: FEINBERG at CMU-20C
To:   Common-Lisp at SU-AI
Subject: Loop vs Do

	I have heard how terrible DO is, and how winning LOOP is.
Could some kind person supply examples of *typical* loops using both
DO and LOOP?

∂02-Sep-82  1348	MOON at SCRC-TENEX 	Re: CHECK-ARG-TYPE [and CHECK-SUBSEQUENCE]   
Date: Thursday, 2 September 1982  16:43-EDT
From: MOON at SCRC-TENEX
To: JonL at PARC-MAXC
Cc: Common-Lisp at SU-AI
Subject: Re: CHECK-ARG-TYPE [and CHECK-SUBSEQUENCE]

I don't care whether CHECK-ARG-TYPE is called that or CHECK-TYPE, as long
as it exists.  CHECK-SUBSEQUENCE seems to be a good idea.

∂02-Sep-82  1349	MOON at SCRC-TENEX 	Re: Access to documentation strings
Date: Thursday, 2 September 1982  16:40-EDT
From: MOON at SCRC-TENEX
To: JonL at PARC-MAXC
Cc: Common-Lisp at SU-AI
Subject: Re: Access to documentation strings

    Date: 2 Sep 1982 12:46 PDT
    From: JonL at PARC-MAXC

    On the other hand, I'm not sure I see any value to the generalization of names
    from symbols to any lisp object -- can you provide any motivation for this?

The same reasons that you need it for function specs.

∂02-Sep-82  1409	MOON at SCRC-TENEX 	case-sensitivity: a modest proposal
Date: Thursday, 2 September 1982  17:01-EDT
From: MOON at SCRC-TENEX
to: common-lisp at SU-AI
Subject: case-sensitivity: a modest proposal

I don't think this proposal is modest at all.  It consists of ramming a
particular model down everyone's throat, which is no better than the status
quo (ramming a different particular model down everyone's throat).  I at least
would find case sensitivity by default to be totally unacceptable.

∂02-Sep-82  1428	MOON at SCRC-TENEX 	Loop vs Do
Date: Thursday, 2 September 1982  17:23-EDT
From: MOON at SCRC-TENEX
To: FEINBERG at CMU-20C
Cc: Common-Lisp at SU-AI
Subject: Loop vs Do

The issue is not that DO is terrible, but that DO only implements a small
subset of what LOOP does.  Specifically, DO only allows you to step variables
in parallel (sometimes you need the new value of one variable to depend on another,
and sometimes you don't), does not come with pre-packaged iterations through
various data structures, does not come with pre-packaged ways to create
return values in various ways, and does not allow you to control the order
of operations (variable-stepping vs end-test).

An example that works naturally in DO will not tell you anything about why
you need LOOP.  The problem is that writing many more complex iterations
with DO requires filling it up with SETQs and RETURNs (even GOs on rare
occasions).
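
A small illustration, in answer to Feinberg's question: collecting the
squares of the odd elements of a list, with the LOOP version written in
the MIT-style keywords (which are not in the white pages):

(DEFUN ODD-SQUARES-DO (LIST)
  (DO ((L LIST (CDR L))
       (RESULT NIL))
      ((NULL L) (NREVERSE RESULT))
    (WHEN (ODDP (CAR L))
      (PUSH (* (CAR L) (CAR L)) RESULT))))

(DEFUN ODD-SQUARES-LOOP (LIST)
  (LOOP FOR X IN LIST
        WHEN (ODDP X)
          COLLECT (* X X)))

The DO version has to manage its own accumulator and reverse it at the
end; the LOOP version gets the stepping and the result construction
from the pre-packaged FOR and COLLECT clauses.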

∂02-Sep-82  1443	ucbvax:<Kim:jkf> (John Foderaro) 	Re: case-sensitivity: a modest proposal  
Date: 2-Sep-82 14:40:46-PDT (Thu)
From: ucbvax:<Kim:jkf> (John Foderaro)
Subject: Re: case-sensitivity: a modest proposal
Message-Id: <60852.6065.Kim@Berkeley>
Received: from UCBKIM by UCBVAX (3.177 [8/27/82]) id a03637; 2-Sep-82 14:41:10-PDT (Thu)
Via: ucbkim.EtherNet (V3.147 [7/22/82]); 2-Sep-82 14:40:51-PDT (Thu)
To: MOON@SCRC-TENEX
Cc: common-lisp@su-ai
In-Reply-To: Your message of Thursday, 2 September 1982  17:01-EDT

  I disagree with your remark that something is being rammed down everyone's
throat.  On the contrary, I think that all messages on the subject (with the
exception of yours) have been cautious and well thought out and I believe
that we are gradually approaching something most people can live with.




∂02-Sep-82  1525	JonL at PARC-MAXC 	Re: Circular structure printing
Date: 2 Sep 1982 15:25 PDT
From: JonL at PARC-MAXC
Subject: Re: Circular structure printing
In-reply-to: dlw at SCRC-TENEX's message of Sunday, 29 August 1982,
 11:33-EDT
To: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
cc: common-lisp at su-ai

InterLisp has had HPRINT for some time, which among other things prints out
circular structure in a "readable" fashion.  Having a separate function for this
purpose, rather than a Global variable flag which affects PRINT, may be a better
route as long as the PRIN1/PRINC dichotomy remains.  Unfortunately, this means
that every place where the PRIN1/PRINC dichotomy appears will now have to
become a PRIN1/PRINC/PRINH trichotomy (e.g., such as in EXPLODE and
PRIN?-TO-STRING). 

Efficiency in HPRINT is obtained by looking for circularities in a hash table,
and either 1) for randomly-accessible files doing the equivalent of a
FILE-POSITION to go back and insert a macro character in front of the
"first" occurrence, or 2) otherwise just printing to a "temporary, in-core"
file and then unloading the temporary file to the real output file.   This
code is, I think, in the public domain, so you could look at it if you still
want to;  printing to a "temporary" file is of course equivalent to SEF's 
suggestion to print first to a string.   [by the bye, HPRINT stands for 
"Horrible PRINT" since it handles all the horrible cases].

LISP/370 had a printer which did circularities right.   It was my subjective,
non-documented, feeling that there was no discernible time loss in this code; 
but then again it used 370 machine language and depended upon having an
alternate "heap" to use as a hash table.   Might be nice to know what some
of the purely-Lisp written printers cost in time.

Beau Sheil noted an interesting comment about the reference-count GC
scheme of InterLisp-D:  since it's primarily structure-to-structure pointers
that are reference-counted (not local variables or stack slots), then a quick,
generally-useful, and fail-safe test for non-circularity is merely a bittest from
the GC table.  This is not how HPRINT is implemented, since it runs on the
PDP10 too, but is an interesting observation about the effects of GC strategy.


∂02-Sep-82  1815	JonL at PARC-MAXC 	Re: LOAD-BYTE and DEPOSIT-BYTE 
Date: 2 Sep 1982 18:13 PDT
From: JonL at PARC-MAXC
Subject: Re: LOAD-BYTE and DEPOSIT-BYTE
In-reply-to: MOON's message of Tuesday, 31 August 1982  18:05-EDT
To: MOON at SCRC-TENEX
cc: JonL, common-lisp at SU-AI, Earl A. Killian <EAK at MIT-MC>,  Guy.Steele
 at CMU-10A

As per EAK's comments, some feel that LOAD-BYTE is preferable to LDB.
But in either case, I'd concur that the combination of both is "redundant";
I wouldn't concur that one or the other is "superfluous".


∂02-Sep-82  1809	JonL at PARC-MAXC 	Re: macro expansion  
Date: 2 Sep 1982 18:09 PDT
From: JonL at PARC-MAXC
Subject: Re: macro expansion
In-reply-to: Moon at SCRC-TENEX's message of Sunday, 29 August 1982,
 21:26-EDT
To: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
cc: Common-Lisp at SU-AI

I think Scott may be mistaken when he says that Chapter 8 of the Colander
edition is the MacLisp style -- it's more the LispM style from some months
ago.  I and RWK introduced multiple-value returning MACROEXPAND into
MacLisp/NIL some years ago, and shortly thereafter added in an expander
hook;  I think those changes were documented in the usual LISP.RECENT
messages.   Since then, the NIL version has been supplanted by yet a third
(incompatible) scheme, but this brings up a couple of questions on your 
proposal as of this dateline:

  1) Why not have MACROEXPAND return the second value, as obtained
    from MACROEXPAND-1  ?  A typical usage merely wants to "do all and 
    any expansions" and also be informed as to whether or not anything really
    happened.   It might even be worthwhile for this second return value, when
    non-null, to distinguish between the case of having actually run the
    expansion function and having found the expansion in some "memoization"
    facility.    (the file [MC]NILCOM;DEFMAX > shows the "...-1" function 
    spelled MACROEXPAND-1*M, but only to avoid losing compatibility with 
    the Lispm on the name MACROEXPAND-1). 

  2) We saw the need for a function called FIND-MACRO-DEFINITION,
    which more-or-less fulfills the purpose of the "blank" in your
    definition labelled "---get expander function---";  thus there is one
    place responsible for things like:
      2a) autoloading if the macro is not resident but does have an 
        AUTOLOAD property, or 
      2b) looking on the alist found in MACROLIST so that one may 
        lambda-bind a macro definition without disturbing the function
        definition cell (nor the propertylist).

  3) although MacLisp/NIL didn't call it a general macro-expander-hook 
    the variable MACRO-EXPANSION-USE supplied just that facility
    (in coordination with DEFMACRO); also, it "distributed" the fetching of
    memoized expansions on a per-macro basis, so that there could be some 
    variability;  after all, a uniform *default* is fine, but occasionally there is 
    a macro which wants to say "don't memoize me at all".   I'd say that there
    is a lacuna in your proposal in that you don't exhibit code for the case of   
    traditional memoizing by hashtable -- that will show how the macroexpansion
    function is selectively called depending on the hashtable entry.  

  4) we often felt the need for an additional argument to the expansion
    function which tells *why* this macro call is being expanded -- e.g.,
    EVAL, or COMPILE, or  FOR-VALUE, or FOR-EFFECTS etc.  I can't
    characterise all such cases, but GSB, KMP, and RWK may have some
    good inputs here.


Point 3 brings up another possibililty -- maybe your current definition of
*MACROEXPAND-HOOK* is an over-generalization.  That is, why not
have two hooks, one for testing whether or not a "memoization" is present
for the macro call in question (and returning it if present), and another hook
of three arguments namely:
      i) a pointer to the original cell (macro call),
     ii) the already-computed expansion form, and
    iii) a symbol to indicate the reason why the macro call was "expanded"
I realise that iii) will require more discussion, so maybe now is not the time
to put it into CommonLisp; also the memoization hook would have to return
a second value to indicate whether a memo was present.

In any case, I'd rather see the default value for these macro hooks be NIL,
so that one didn't have to understand the complexities of expansions and
memoizations just to get the default case (that is, unless someone plans to
allow NIL as a valid function name and . . . )
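
For point 3, the "traditional memoizing by hashtable" case might look
roughly like this; the hook's calling convention (expander function
plus the original macro-call form) is an assumption, since the proposal
under discussion does not pin it down:

(defvar *macro-memo-table* (make-hash-table :test #'eq))

(defun memoizing-macroexpand-hook (expander form)
  ;; EQ hashing memoizes per call site, since each macro call is a
  ;; distinct cons.
  (or (gethash form *macro-memo-table*)
      (setf (gethash form *macro-memo-table*)
            (funcall expander form))))

A macro that wants to say "don't memoize me at all" then needs some
per-macro escape from this hook, which is exactly the variability asked
for above.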


     

∂02-Sep-82  1955	Kim.fateman at Berkeley 	dlw's portability semantics   
Date: 2 Sep 1982 19:50:40-PDT
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: dlw's portability semantics

Of course portability has many dimensions.  I thought that CL
was supposed to refrain from gratuitous incompatibilities with
maclisp, interlisp, zetalisp, ... .  The purpose of this is presumably
to allow some previously working code to be moved to a CL system.

I would think that (regardless of the definition of LOOP in CL),
an interlisp LOOP  (or FOR, or whatever...)package would be useful.
Did you take umbrage at this notion, dlw?

∂02-Sep-82  2027	ucbvax:<Kim:jkf> (John Foderaro) 	scott's message about case sensitivity   
Date: 2-Sep-82 20:21:30-PDT (Thu)
From: ucbvax:<Kim:jkf> (John Foderaro)
Subject: scott's message about case sensitivity
Message-Id: <60852.14309.Kim@Berkeley>
Received: from UCBKIM by UCBVAX (3.177 [8/27/82]) id a10980; 2-Sep-82 20:21:49-PDT (Thu)
Via: ucbkim.EtherNet (V3.147 [7/22/82]); 2-Sep-82 20:21:31-PDT (Thu)
To: common-lisp@su-ai
In-Reply-To: Your message of Tuesday, 31 August 1982  10:56-EDT


    From: Scott E. Fahlman <Fahlman at Cmu-20c>
    I see two cultures developing
    very quickly, one of which types in lower-case only and the other
    capitalizing assorted words, as in ThisIsaReallyUglySymbol.  It still
    looks like a recipe for chaos to me.
    
But such recipes already exist.  What if I decided to use the underscore
instead of the hyphen in compound word symbols?  What if I decide to
use an escaped space?
	load-byte    load←byte    load\ byte

The addition of one more convention, LoadByte, is not going to make much of
a difference.  If a person writes a package for general consumption he
should follow the conventions; if he chooses not to, then he can break the
convention in a number of ways.





∂02-Sep-82  2211	Kent M. Pitman <KMP at MIT-MC> 	It's not just "LOOP vs DO"...    
Date: 3 September 1982 00:46-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  It's not just "LOOP vs DO"...
To: COMMON-LISP at SU-AI

I might point out that Dick Waters (DICK@ML) is working on some macros
for iteration which look like they may have much the same expressive
power as LOOP but in a far more natural notation. There is a small
community of users now experimenting with his package (called letS) and
he's working on a paper about it. It is not quite ready even for release
yet and I certainly wouldn't propose it be considered for a standard
anytime soon, but it does show a lot of promise ... On the other hand, I
think there's not a clear consensus that LOOP is the right thing. I'll
agree with the LOOP people that more abstraction on loops is needed than
just DO but I'm not sure that LOOP is the right answer (I'd certainly
never use it) ...  and I'd like to avoid people standardizing on
anything so controversial as LOOP while experimentation is still ongoing
with things that may prove as good. So in my mind, LOOP should
definitely not be in the white pages at this time and more importantly I
think people should keep their eyes open to other alternatives. It's not
like DO and LOOP are the only directions that things can go. Dave
Chapman (ZVONA@MC) had a very interesting macro called DO& which was yet
another alternative... I'm sure there are others.  From the point of
view of a standard, I think it's most reasonable to pick accepted
technology for the white pages and there I think DO and DO* are simple,
powerful (computationally adequate for writing any kind of loop), and
perhaps most importantly to the standard, have very well-understood and
well-accepted semantics.
-kmp

∂02-Sep-82  2300	Kent M. Pitman <KMP at MIT-MC>
Date: 3 September 1982 01:52-EDT
From: Kent M. Pitman <KMP at MIT-MC>
To: common-lisp at SU-AI

I agree with Moon re case-sensitivity. I think any proposal involving case
sensitivity as a default is a very bad idea.

I have avoided sending mail to this list which enumerates by particular 
feelings on case because I imagine everyone has a lot to say about it and
I've tried not to contribute excessively without having something truly
novel to say. But since people are starting to draw conclusions by the lack
of mail opposing the idea of case-sensitivity, let me say that this lack
of mail doesn't mean that people don't object, it just means that no vote
has been taken (nor, as someone pointed out, is a vote necessarily the way
to decide such an issue) so I think people are just mostly listening to what
others have to say.

As long as you've got this message anyway, I might as well put out my views
so they'll be on record and I won't have to send another later...

* I think case sensitivity has no place in a language (formal or otherwise).
  WORDS are Words are words. People remember the auditory representation of
  words and since case is not pronounced, it is hard to remember. Languages
  should be designed such that you can comfortably talk in or about them 
  and having to say "Oh, you need the function capital KILL" or say 
  send mail to me as "Capital P Little I-T-M-A-N at Multics" is really 
  awkward.

* Any case sensitivity is prone to idiosyncrasy. If I happen to *like* 
  upper case or even just uniform case (ThisDrivesMeUpAWall!), it's nice
  that I can write my code one way and you another and they'll still talk
  back and forth. Lack of case sensitivity gives a shield from peoples'
  odd casing styles which actually allows people more flexibility in their
  use of case, not less.

I spent my first few years (what I would have expected to be my "formative
years") on case sensitive systems (Multics, Unix) and was truly happy to 
find Maclisp's readtime character translation... Basically, it lets me write
code in whatever case I like, and since I type to an editor, just load
code and run it, and most anything that types out has "..." around it anyway,
I get nice mixed case output.

Indeed, I wonder often if part of the push from the Interlisp crowd for mixed
case does not revolve around their heavier use of in-core editing,
which causes them to have to worry about "losing" the case information
in their source code, whereas in Maclisp/LispM usage functions
are almost always defined in an external editor (in the LispM case, a
well-integrated external editor) and the actual source is never touched
by anything which would propose to change its case...

∂03-Sep-82  0210	David A. Moon <Moon at SCRC-POINTER at MIT-MC> 	case-sensitivity: an immodest proposal    
Date: Friday, 3 September 1982, 05:03-EDT
From: David A. Moon <Moon at SCRC-POINTER at MIT-MC>
Subject: case-sensitivity: an immodest proposal
To: Common-Lisp at SU-AI

I guess I need to amplify on my previous message, even at the risk of
discussing this to death, since my previous brief flame left too much
to the imagination.

The case behavior in the current Common Lisp manual forces case-insensitivity
on everyone, whether they like it or not.  There is a possibility of a standard
mode that you can turn on to get case-sensitivity, with the details not
fully specified.  However, this is a fraud, because you aren't allowed to
use it in portable programs.

Masinter's "modest" proposal instead forces case-sensitivity on everyone,
whether they like it or not.  There is the possibility of using two cases,
however this is a fraud, because you are only allowed to use one case in
portable programs.

So what we end up with is a choice between being allowed to use either case,
but not having the cases distinguished, or only being allowed to use one
case (lower), in portable programs.  It seems to me that all the modest
proposal accomplishes (for portable programs) is not allowing them to
be written in upper case, hardly an improvement.

∂03-Sep-82  0827	HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility) 	administrative request 
Date:  3 Sep 1982 1124-EDT
From: HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility)
Subject: administrative request
To: common-lisp at SU-AI

Who do we ask to get people added to this list?  We have asked RPG,
but that didn't seem to have any effect.  We need to add JOSH and
FISCHER at Rutgers.  I have been forwarding things to them, but I
am about to go away for a week, so unless they can get added, they
will effectively be cut off during that time.
-------

∂03-Sep-82  1012	ucbvax:<Kim:jkf> (John Foderaro) 	cases, re: kmp's and moon's mail    
Date: 3-Sep-82 09:08:30-PDT (Fri)
From: ucbvax:<Kim:jkf> (John Foderaro)
Subject: cases, re: kmp's and moon's mail
Message-Id: <60852.23814.Kim@Berkeley>
Received: from UCBKIM by UCBVAX (3.177 [8/27/82]) id a28737; 3-Sep-82 09:08:51-PDT (Fri)
Via: ucbkim.EtherNet (V3.147 [7/22/82]); 3-Sep-82 09:08:32-PDT (Fri)
To: common-lisp@su-ai

    Let me once again ask that we not let our personal feelings about the
subject determine how Common Lisp should treat cases.

    Kent says that he has trouble remembering cases in words.  I can accept
that but Kent should not generalize and say that this is true of all people.
I know plenty of people who have no trouble remembering cases and can even
converse in a case-sensitive domain without any strain.  I haven't had a
chance to talk to most of you in person but I guess that you pronounce
'load-byte' as "load byte", not as "load hyphen byte".   How would you
pronounce 'loadbyte'?   The points are these:
 1) don't shackle others with your limitations
 2) The ability to imply a hyphen by just saying "load byte" is an example
    of how conventions are used to map a sequence of syllables into 
    a string of characters.  In another context the same people could use
    a convention whereby "load byte" meant "LoadByte".


Re:
    From: David A. Moon <Moon at SCRC-POINTER at MIT-MC>

    Masinter's "modest" proposal instead forces case-sensitivity on everyone,
    whether they like it or not.  There is the possibility of using two cases,
    however this is a fraud, because you are only allowed to use one case in
    portable programs.
    
	
I am not sure whether Moon's fingers slipped or if he really thinks that
Masinter's proposal refers to 'portable' programs.  In fact, Masinter's
proposal says that all 'public' programs must be in lower case:

   From: Masinter at PARC-MAXC
   b) All symbols in packages admitted into the Common Lisp white- and yellow-
      pages are REQUIRED to be lower case. 

Everyone wants their programs to be portable, so the set of portable
programs is (almost) equivalent to the set of all programs written in Common
Lisp.   However the set of all 'public' programs, that is those that are
documented in the white and yellow pages, will be a small subset of the set
of Common Lisp programs.   While Masinter's proposal requires the 'public'
programs to follow the convention of all lower case, it says nothing about
what my private programs have to look like.    However, if I write a
piece of code which is useful enough to be added to the public code, then
that code will be translated to lower case before being added.


    I think that most people understand the arguments on both sides now and
I would like to see this resolved one way or the other.   Could the
implementors get together and come up with a policy decision?




∂03-Sep-82  1020	Scott E. Fahlman <Fahlman at Cmu-20c> 	cases, re: kmp's and moon's mail    
Date: Friday, 3 September 1982  13:20-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   ucbvax:<Kim:jkf at CMU-20C
Cc:   common-lisp at SU-AI
Subject: cases, re: kmp's and moon's mail


John,

Before we try to reach a final decision, did anything ever come of your
polling of the Franz Lisp mailing list?  I would be most interested in
the results of this poll (or lack of results, which I would consider
very significant) before deciding for sure where I stand on this issue.
Right now, I still favor the case-insensitive status quo over the
"switch" proposal or the "modest" proposal, but the input from the unix
people could make a difference, especially for the Vax/Unix
implementation we are doing.

-- Scott

∂03-Sep-82  1452	ucbvax:<Kim:jkf> (John Foderaro) 	Re: cases, re: kmp's and moon's mail
Date: 3-Sep-82 10:57:24-PDT (Fri)
From: ucbvax:<Kim:jkf> (John Foderaro)
Subject: Re: cases, re: kmp's and moon's mail
Message-Id: <60852.25601.Kim@Berkeley>
Received: from UCBKIM by UCBVAX (3.177 [8/27/82]) id a01465; 3-Sep-82 10:57:45-PDT (Fri)
Via: ucbkim.EtherNet (V3.147 [7/22/82]); 3-Sep-82 10:57:25-PDT (Fri)
To: Fahlman@Cmu-20c
Cc: common-lisp@SU-AI
In-Reply-To: Your message of Friday, 3 September 1982  13:20-EDT

  I've gotten 41 responses so far on my survey.  These responses were all to
the franz-friends poll and most were from the arpanet.   I think that many
Franz Lisp users are reachable only via uucp,  yet I haven't gotten any
responses from a uucp site.   I'm pretty sure that our recent hardware
problems prevented the poll from reaching uucp land.  I will try sending the
poll again just to uucp sites.

  Our 'Mail' machine has been up for over a day now (a modern record) but
there still seem to be problems (I've received three copies of the last
letter I sent).  I apologize in advance for the number of copies of this
message you may receive.




∂03-Sep-82  1519	Guy.Steele at CMU-10A 	REDUCE function re-proposed
Date:  3 September 1982 1756-EDT (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  REDUCE function re-proposed

I would like to mildly re-propose the REDUCE function for Common
LISP, now that adding it would require only one new function, not ten
or fifteen:


REDUCE function sequence &KEY :START :END :FROM-END :INITIAL-VALUE
    The specified subsequence of "sequence" is reduced, using the "function"
    of two arguments.  The reduction is left-associative, unless
    :FROM-END is not false, in which case it is right-associative.
    If an :INITIAL-VALUE is given, it is logically placed before the
    "sequence" (after it if :FROM-END is true) and included in the
    reduction operation.  If no :INITIAL-VALUE is given, then the "sequence"
    must not be empty.  (An alternative specification: if no :INITIAL-VALUE
    is given, and "sequence" is empty, then "function" is called with
    zero arguments and the result returned.  How about that?  This idea
    courtesy of Dave Touretzky.)

    (REDUCE #'+ '(1 2 3 4)) => 10
    (REDUCE #'- '(1 2 3 4)) => -8
    (REDUCE #'- '(1 2 3 4) :FROM-END T) => -2   ;APL-style
    (REDUCE #'LIST '(1 2 3 4)) => (((1 2) 3) 4)
    (REDUCE #'LIST '(1 2 3 4) :FROM-END T) => (1 (2 (3 4)))
    (REDUCE #'LIST '(1 2 3 4) :INITIAL-VALUE 'FOO) => ((((FOO 1) 2) 3) 4)
    (REDUCE #'LIST '(1 2 3 4) :FROM-END T :INITIAL-VALUE 'FOO)
				 => (1 (2 (3 (4 FOO))))

--Guy
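
A minimal sketch of the behavior described above, purely illustrative and not
part of the proposal; the coercion through a list, the keyword defaults, and
the name REDUCE-SKETCH are assumptions made here rather than anything
specified:

(defun reduce-sketch (function sequence &key (start 0) end from-end
                               (initial-value nil initial-value-p))
  ;; Reduce the subsequence from START to END with FUNCTION, combining
  ;; left-associatively, or right-associatively if FROM-END is true.
  (let* ((end (or end (length sequence)))
         (elements (coerce (subseq sequence start end) 'list))
         (elements (if from-end (reverse elements) elements))
         (combine (if from-end
                      #'(lambda (acc x) (funcall function x acc))
                      #'(lambda (acc x) (funcall function acc x)))))
    (cond (initial-value-p
           (let ((acc initial-value))
             (dolist (x elements acc)
               (setq acc (funcall combine acc x)))))
          ((null elements)
           (funcall function))          ;Touretzky's zero-argument alternative
          (t (let ((acc (car elements)))
               (dolist (x (cdr elements) acc)
                 (setq acc (funcall combine acc x))))))))

With this sketch, (REDUCE-SKETCH #'- '(1 2 3 4)) gives -8 and
(REDUCE-SKETCH #'- '(1 2 3 4) :FROM-END T) gives -2, matching the examples
above.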

∂03-Sep-82  1520	Guy.Steele at CMU-10A    
Date:  3 September 1982 1803-EDT (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI


- - - - Begin forwarded message - - - -
Mail-From: ARPANET host CMU-20C received by CMU-10A at 2-Sep-82 03:25:30-EDT
Mail-from: ARPANET site MIT-MC rcvd at 2-Sep-82 0324-EDT
Date: 2 September 1982 03:21-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  macro expansion
To: Fahlman at CMU-20C
cc: Steele at CMU-20C

    Date: Sunday, 29 August 1982  23:56-EDT
    From: Scott E. Fahlman <Fahlman at Cmu-20c>
    To:   Common-Lisp at SU-AI
    Re:   macro expansion

    ... The only quibble I have is whether we want to spell *MACROEXPAND-HOOK*
    with the stars.  We should only do this if we decide to spell all (or
    almost all) built-in global hooks this way...
-----
As I mentioned at the last common lisp meeting, I advocate the *...* naming
convention for two reasons:

 * It clearly identifies special variables. Makes it easy to tell which
   compiler "... undeclared, assumed special" warnings are worth worrying
   about.

 * It means that variables and functions are in different namespaces, which
   may be important in the package system you devise since currently there is
   no way to export only the value cell or only the function cell of a symbol.

-kmp
- - - - End forwarded message - - - -

∂03-Sep-82  1520	Guy.Steele at CMU-10A 	Backquote proposal per issue 99 
Date:  3 September 1982 1814-EDT (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Backquote proposal per issue 99

Here is the backquote proposal per issue 99.  It is exactly
what is in the latest Common LISP Manual draft, except that
I have conquered the SCRIBE bug that caused backquotes not
to print.
--Guy
-------------------------------------------------------------------------

`  The backquote (accent grave) character makes it easier to write
   programs to construct complex data structures by using a template.
   As an example, writing

       `(cond ((numberp ,x) ,@y) (t (print ,x) ,@y))

   is roughly equivalent to writing

       (list 'cond
             (cons (list 'numberp x) y)
             (list* 't (list 'print x) y))

   The general idea is that the backquote is followed by a template, a
   picture of a data structure to be built.  This template is copied,
   except that within the template commas can appear.  Where a comma
   occurs, the form following the comma is to be evaluated to produce an
   object to be inserted at that point.  If B has the value 3, for
   example, then evaluating the form denoted by ```(A B ,B ,(+ B 1) B)''
   produces the result (A B 3 4 B).
   If a comma is immediately followed by an at-sign (``@''), then the
   form following the at-sign is evaluated to produce a list of objects.
   These objects are then ``spliced'' into place in the template.  For
   example, if X has the value (A B C), then

       `(x ,x ,@x foo ,(cadr x) bar ,(cdr x) baz ,@(cdr x))
          -> (x (a b c) a b c foo b bar (b c) baz b c)

   The backquote syntax can be summarized formally as follows.  For each
   of several situations in which backquote can be used, a possible
   interpretation of that situation as an equivalent form is given.
   Note that the form is equivalent only in the sense that when it is
   evaluated it will calculate the correct result.  An implementation is
   quite free to interpret backquote in any way such that a backquoted
   form, when evaluated, will produce a result EQUAL to that produced by
   the interpretation shown here.

      - `simple is the same as 'simple, that is, (QUOTE simple),
        for any form simple that is not a list or a general vector.

      - `,form is the same as form, for any form, provided that the
        representation of form does not begin with ``@'' or ``.''.
        (A similar caveat holds for all occurrences of a form after
        a comma.)

      - `,@form is an error.

      - `(x1 x2 x3 ... xn . atom) may be interpreted to mean
        (APPEND x1 x2 x3 ... xn (QUOTE atom)), where the underscore
        indicates a transformation of an xj as follows:

           * form is interpreted as (LIST `form), which contains a
             backquoted form that must then be further interpreted.

           * ,form is interpreted as (LIST form).

           * ,@form is interpreted simply as form.

      - `(x1 x2 x3 ... xn) may be interpreted to mean the same as
        `(x1 x2 x3 ... xn . NIL).

      - `(x1 x2 x3 ... xn . ,form) may be interpreted to mean
        (APPEND x1 x2 x3 ... xn form), where the underscore
        indicates a transformation of an xj as above.

      - `(x1 x2 x3 ... xn . ,@form) is an error.

      - `#(x1 x2 x3 ... xn) may be interpreted to mean (MAKE-VECTOR
        NIL :INITIAL-CONTENTS `(x1 x2 x3 ... xn)).

   No other uses of comma are permitted; in particular, it may not
   appear within the #A or #S syntax.
   Anywhere ``,@'' may be used, the syntax ``,.'' may be used instead to
   indicate that it is permissible to destroy the list produced by the
   form following the ``,.''; this may permit more efficient code, using
   NCONC instead of APPEND, for example.
   If the backquote syntax is nested, the innermost backquoted form
   should be expanded first.  This means that if several commas occur in
   a row, the leftmost one belongs to the innermost backquote.
   Once again, it is emphasized that an implementation is free to
   interpret a backquoted form as any form that, when evaluated, will
   produce a result that is EQUAL to the result implied by the above
   definition.  In particular, no guarantees are made as to whether the
   constructed copy of the template will or will not share list
   structure with the template itself.  As an example, the above
   definition implies that `((,A B) ,C ,@D) will be interpreted as if it
   were

       (append (list (append (list a) (list 'b) 'NIL)) (list c) d 'NIL)

   but it could also be legitimately interpreted to mean any of the
   following:

       (append (list (append (list a) (list 'b))) (list c) d)
       (append (list (append (list a) '(b))) (list c) d)
       (append (list (cons a '(b))) (list c) d)
       (list* (cons a '(b)) c d)
       (list* (cons a (list 'b)) c d)
       (list* (cons a '(b)) c (copylist d))

   (There is no good reason why COPYLIST should be performed, but it is
   not prohibited.)
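
A rough, purely illustrative sketch of the list rule above.  It assumes the
reader has already turned ,form into an (UNQUOTE form) marker and ,@form into
an (UNQUOTE-SPLICE form) marker (marker names invented here), and it handles
only atoms and proper lists, not vectors or dotted tails:

(defun expand-backquote (template)
  (cond ((atom template)
         (list 'quote template))                     ;`simple => 'simple
        ((eq (car template) 'unquote)
         (cadr template))                            ;`,form  => form
        (t
         ;; `(x1 x2 ... xn) => (APPEND t1 t2 ... tn), where tj is
         ;; (LIST `xj), (LIST form) for ,form, or form itself for ,@form.
         (cons 'append
               (mapcar #'(lambda (x)
                           (cond ((and (consp x) (eq (car x) 'unquote))
                                  (list 'list (cadr x)))
                                 ((and (consp x) (eq (car x) 'unquote-splice))
                                  (cadr x))
                                 (t (list 'list (expand-backquote x)))))
                       template)))))

Applied to the marker form of the COND example near the top of this writeup,
this yields an APPEND/LIST expression that evaluates to an EQUAL result.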

∂03-Sep-82  1527	Kent M. Pitman <KMP at MIT-MC> 	More case stuff: speed and accuracy   
Date: 3 September 1982 18:23-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject: More case stuff: speed and accuracy
To: common-lisp at SU-AI

I would also be interested in knowing how many people advocating function
names like LoadByte over load-byte, etc. are touch typists and what their
typing speed is. I type I think reasonably fast (upwards of 50 words a minute)
and I note that it blocks my typing speed tremendously to have to shift.
The "-" key can be hit in sequence with other keys because it's only one thing
to have to toss in, but having to shift involves coordinating two fingers to
do something at exactly the same time and then making sure I get my finger off
the shift key in time for the following character. I would anticipate that
in a system that made case matter and that had mixed case variable names,
I'd have to type considerably slower. I would consider this a reasonably
expensive price to pay since I code very fast and if I know roughly what I
want to write, my coding speed may well be bounded by my typing speed. Slower
typists, or fast typists who don't code very fast, may not be bothered so
much by this ... This is particularly critical in
interactive debugging on systems without a toplevel text editor built into
the lisp reader where I may have just held the shift key too long and have
to rub way back just to change the shift of a character that I typed wrong
without noticing ... Yes, I know, I could have dropped a dash or typed a wrong
key, but I claim that it's easier to make a case mistake than to hit a wrong
letter in a lot of cases just because the shift key involves synchronizing
two fingers and the other types of processes that go on in typing are problems
of sequential .. Certainly the frustration level of having typed
(Setq foo 3) and then finding that you'd left the shift on too long for the
"S" is worse than the frustration level of having doing (Aetq foo 3). The
former does not "feel" to me like it has as much right to be an error. The
latter is much easier to tolerate an error message about... 

∂03-Sep-82  1551	Guy.Steele at CMU-10A 	DLW query about STRING-OUT and LINE-OUT   
Date:  3 September 1982 1835-EDT (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  DLW query about STRING-OUT and LINE-OUT

These functions were eliminated as a result of November issue 214.
--Guy

∂03-Sep-82  1739	JonL at PARC-MAXC 	Re: function specs   
Date: 3 Sep 1982 17:40 PDT
From: JonL at PARC-MAXC
Subject: Re: function specs
In-reply-to: dlw at SCRC-TENEX's message of Tuesday, 31 August 1982,
 11:21-EDT
To: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
cc: Fahlman at Cmu-20c, Common-Lisp at SU-AI

Not only do I like SEF's suggestion (use the SETF syntax instead of
introducing a whole new syntax for function specs), but it seems to
resolve two important issues:
  1) Symbols should be used as names, not other datatypes
  2) There must be a uniform way to cause a defined function to be stored
    at any reasonably accessible location, regardless of whether or not
    that function "has" a name

A key word in point 2 is "store".  Thus, use SETF syntax, inventing new
accessor functions where necessary.

It's a defect if there are "locations" of relevance in compiled code which
the user can't access.  Thus if some anonymous lambda expression causes 
an internally-generated function,  *and there is some need to "get hold"
of it after compilation*, then there should be appropriate accessor functions,
regardless of the function-specs controversy.  Possibly new accessor names
need not be invented, if there can be conventions established for the
storage of compiled code constructs, but this is a lower-level implementation
matter.

Also, I feel, it's misguided to throw out symbols as names simply because
of reaction to the uncomfortable wart in MacLisp engendered by
   (DEFUN (FOO BAZZAZ) (X) . . . )





∂03-Sep-82  1911	MOON at SCRC-TENEX 	REDUCE function re-proposed   
Date: Tuesday, 9 March 1982  22:12-EST
From: MOON at SCRC-TENEX
To: Guy.Steele at CMU-10A
Cc: common-lisp at SU-AI
Subject: REDUCE function re-proposed

Sure, put it in.  It is useful sometimes.

∂03-Sep-82  1912	MOON at SCRC-TENEX 	Backquote proposal per issue 99    
Date: Tuesday, 9 March 1982  22:10-EST
From: MOON at SCRC-TENEX
To: Guy.Steele at CMU-10A
Cc: common-lisp at SU-AI
Subject: Backquote proposal per issue 99

You just can't win.  This time you got the backquotes in but the underscores
went away.  Anyway, it looks like there isn't anything wrong with the specification;
I vote yes.

It was damned sneaky of you to use only numbers in your one nested-backquote
example, so that the reader would have no chance of figuring out when or how
many times things will be evaluated.  You should at least put in the example
from page 214 of the Chine Nual, since this issue confuses everyone.

∂03-Sep-82  2015	Guy.Steele at CMU-10A 	Backquote proposal    
Date:  3 September 1982 2315-EDT (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Backquote proposal

Ooops, sorry about that.
Certainly better examples should be provided; and I apologize for the
underscores going away.  SCRIBE is only *mostly* device-independent.

Anyway, I want to point out that the main reason for the formal
definition is to make it very precise (if not clear) what nesting
of backquotes does, and also to guarantee that you can't get into
trouble nesting , and ,@ in odd ways.  In some present implementations
I believe it is possible to fake backquote out into producing
an expression that gives CONS the wrong number of arguments.
--Guy

∂03-Sep-82  2041	Guy.Steele at CMU-10A 	Clarification of closures and GO
Date:  3 September 1982 2339-EDT (Friday)
From: Guy.Steele at CMU-10A
To: jonl at PARC-MAXC
Subject:  Clarification of closures and GO
CC: common-lisp at SU-AI, moon@scrc-tenex at MIT-MC, hedrick at rutgers


  1) Glaaag, did the CL meeting *really* ditch the notion of being able
    to include special variables into closures?  I thought it merely decided 
    to extend the LispM syntax to include all lexically appearing variables 
    by default (sort of what NIL was proposing to do) which would mean 
    current LispM closures would be a subset of the future closures.

First of all, as has already been pointed out on the mailing list,
the word "closure" has been used to mean different things.  Here I will
always speak explicitly of "closures over lexical variables" and
"closures over special variables".

The CL meeting did *not* vote to get rid of either kind of closure.
It was to have been an item on the agenda at Hedrick's request, and I
apologize greatly for having overlooked it.  Subsequent to the meeting
the notion of eliminating closures over special variables has been
discussed on the net.  As nearly as I can tell, so far there is some
sympathy for eliminating them, and no ardent voices for retaining them.
However, a formal poll has not been conducted.

The current state of Common LISP with respect to closures is as follows:
bound variables that are not special are lexical (not local--if a compiler
can suitably determine that a lexical variable is not referred to from within
a nested lambda-expression, then it is welcome to treat it as local; this
static analysis can be performed at compile time).  Therefore the FUNCTION
construct containing a lambda-expression may need to construct a closure
over lexical variables.  Again, in the interpreter it is probably simplest
to close over all lexical variables, but compiled code can close over
only those variables that are needed (possibly none).

There is a separate construct, called CLOSURE, that constructs closures
over special variables.  It takes any function and an explicit list of
names of special variables, and constructs a closure over precisely those
special variables and no others.  It does not close over any lexical
variables whatsoever.  If you write
(DECLARE (SPECIAL B))
 ...
    (LET ((A 0) (B 1))
	(CLOSURE #'(LAMBDA (X Y) (+ A B X Y)) '(B)))
then the result of the CLOSURE operation indeed closes over both
A (a lexical variable) and B (a special variable), but for completely
independent reasons.  The use of FUNCTION implied by #' caused A to
be closed over before the CLOSURE operation ever executed.


  2) I can't believe we agreed to support full funarging.  That's incredible!
    Only the spaghetti stack does this "right", for the pseudo-mathematical
    concept you mention, and CL is nowhere near spaghetti yet.  The
    alternative is to ditch Lisp semantics and go for Scheme, and I hope
    there hasn't been any consensus on that!!!

Well, full lexical scoping as well as dynamic scoping has been agreed to;
along with CLOSURE, that's as full as I've ever seen any funarging get.
Even without CLOSURE, one still has closures over lexical variables,
which I suppose is what some people mean by "full funarging".  It is
debatable whether spaghetti stacks "do it right" (I would refer you
to my 1977 paper "Macaroni is Better than Spaghetti", except
that I don't believe that Macaroni *is* better; nevertheless that
paper contains a critique of spaghetti stacks that may be relevant).
Even if they do "do it right", it is debatable whether that is the only
correct model.

I'm sure that Common LISP hasn't gone for SCHEME, and I suspect the T folks
at Yale would back me up there.  While Common LISP, as currently defined,
supports lexical scoping, it also supports dynamic scoping in pretty
much the traditional style, and furthermore differs from SCHEME in having
separate value and function cells.

  3) Yea, my myopic misreading of Hedrick caused me to say issue #68 (and
    #62) when I meant #8.  But again, the consensus was to allow any local
    GO (local includes all the cases in issue #8 except from within funargs);
    Non-local GO's, such as could be the case from within funargs, were
    certainly argued against, so I'd hardly say there was consensus on this
    point;  wouldn't such non-local GO's ultimately imply spaghetti also?

The consensus of the recent meeting was that PROG tags, to which GO's
refer, would have lexical scope and *dynamic extent*.  The effect of that
last phrase is to make them behave like a CATCH followed by a local GO.
That is, once you have exited a PROG by whatever means, its tags are
no longer valid and you may not GO to them.

∂03-Sep-82  2125	STEELE at CMU-20C 	Proposed definition of SUBST   
Date:  4 Sep 1982 0025-EDT
From: STEELE at CMU-20C
Subject: Proposed definition of SUBST
To: common-lisp at SU-AI

Here is a tentative definition of SUBST for inclusion in the white pages:
(defun subst (old new tree &key (test #'eql testp)
		                (test-not nil test-not-p)
				(key #'(lambda (x) x)))
  (cond (test-not-p
	 (if testp <signal-an-error>
	     (subst old new tree
		    :test #'(lambda (x y) (not (funcall test-not x y)))
		    :key key)))
	((atom tree)
	 (if (funcall test old tree) new tree))
	(t (let ((a (subst old new (car tree) :test test :key key))
		 (d (subst old new (cdr tree) :test test :key key)))
	     (if (and (eq a (car tree)) (eq d (cdr tree)))
		 tree
		 (cons a d))))))

I had two problems.  One is that :TEST-NOT is a pain to implement.
Either I have to use the subterfuge I did here, which is clumsy or slow,
or else I have to implement SUBST twice, once for the version that
uses TEST and once for the one that uses TEST-NOT.  Actually, what
I need is a function XOR, taking two boolean values (NIL/not-NIL)
and returning their xor, so that I can pass a TEST-NOT-P flag and
xor it with the result of the predicate.
The other problem is that I wish there were a standard identity function
for use in initializing the :KEY parameter.  (I refuse to use VALUES!)
Any suggestions on this or other aspects of the above definition?
--Guy
-------
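
A trivial sketch of the XOR helper mentioned above; the name and the exact
boolean convention are assumptions:

(defun xor (a b)
  ;; Exclusive or on generalized booleans: true iff exactly one of A, B is non-NIL.
  (if a (not b) (if b t nil)))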

∂03-Sep-82  2134	STEELE at CMU-20C 	Another try at SUBST 
Date:  4 Sep 1982 0034-EDT
From: STEELE at CMU-20C
Subject: Another try at SUBST
To: common-lisp at SU-AI

How about this one?

(defun subst (old new tree &key test test-not key)
  (cond ((atom tree)
	 (if (satisfies-the-test old tree :test test :test-not test-not :key key)
	     new tree))
	(t (let ((a (subst old new (car tree) :test test :key key))
		 (d (subst old new (cdr tree) :test test :key key)))
	     (if (and (eq a (car tree)) (eq d (cdr tree)))
		 tree
		 (cons a d))))))

(defun satisfies-the-test (x y &key test test-not key)
  (if key
      (if test
	  (if test-not
	      <signal-error>
	      (funcall test x (funcall key y)))
	  (if test-not
	      (not (funcall test x (funcall key y)))
	      (eql x (funcall key y))))
      (if test
	  (if test-not
	      <signal-error>
	      (funcall test x y))
	  (if test-not
	      (not (funcall test x y))
	      (eql x y)))))

Actually, SATISFIES-THE-TEST might be useful to define for user use?
--Guy
-------

∂03-Sep-82  2139	STEELE at CMU-20C 	Flying off the handle: one more time on SUBST 
Date:  4 Sep 1982 0038-EDT
From: STEELE at CMU-20C
Subject: Flying off the handle: one more time on SUBST
To: common-lisp at SU-AI

The last (second) try was buggy; here's a corrected version.
Note the use of &REST with &KEY, and the slick use of APPLY:

(defun subst (old new tree &rest x &key test test-not key)
  (cond ((atom tree)
	 (if (satisfies-the-test old tree :test test
				 :test-not test-not :key key)
	     new tree))
	(t (let ((a (apply #'subst old new (car tree) x))
		 (d (apply #'subst old new (cdr tree) x)))
	     (if (and (eq a (car tree)) (eq d (cdr tree)))
		 tree
		 (cons a d))))))

--Guy
-------
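
For concreteness, a call under the definition above (note the argument order
old, new, tree as written there) might behave as follows, assuming the
default EQL test:

(subst 'a 0 '(a (b . a) c))     ;=> (0 (b . 0) c)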

∂03-Sep-82  2136	MOON at SCRC-TENEX 	Agenda Item 74: Interaction of BLOCK and RETURN   
Date: Saturday, 4 September 1982  00:28-EDT
From: MOON at SCRC-TENEX
To: Common-lisp at su-ai
cc: alan at SCRC-TENEX, dla at SCRC-TENEX
Subject:Agenda Item 74: Interaction of BLOCK and RETURN

At the meeting we commissioned GLS and SEF to make a proposal about this.
But in the meantime Bawden has come up with a good proposal, hence I am
mailing off this writeup of it.

The problem:
Blocks, named and unnamed, are used to label a point to which the RETURN
special form will transfer control.  Some special forms are defined and
documented to create a block, thus RETURN may be used inside of them.  PROG,
DO, and DOLIST are examples.  Some of these special forms might be implemented
as macros, expanding into simpler special forms such as BLOCK or PROG.  It's
easy to see how to implement DOLIST this way, for example.

Sometimes one has a macro that needs to transfer control internally within
the code it writes, using BLOCK and RETURN or the equivalent.  However,
this block is purely the internal business of this macro and should not be
visible to the user.  This is different from DOLIST, where the user is told
that there is a block there.  If the macro takes a body, and the user
writes RETURN inside that body, the RETURN should return from the block the
user expects it to return from; it shouldn't be affected by the internal
block written by the macro.  The classic example of this occurs when a
compiler optimizes (MAPCAR #'(LAMBDA ...)) by translating it into an
iteration, rather than breaking off a separate function and passing it to
MAPCAR.  The iteration is done with PROG or DO, which creates a block.  If
RETURN was used inside the body of the lambda, it should not return
unexpectedly from the MAPCAR, it should return from some enclosing form,
assuming we are using a full-lexical-scoping language.  Another example is
an error-handling macro that generates code something like:
	(PROG () (CATCH tag (RETURN body))
		 error-body)
Here the idea is that in the normal case we want to return the value(s) of body
from the form; but if a throw occurs, we want to go off and do error-body.
However, the user might well write a RETURN inside body, and this RETURN should
not be captured by the PROG, which the user has no idea is there.

Macros may generate blocks for GO as well as for RETURN.  The MAPCAR example
above was an example of this.

A solution that doesn't quite work:
The 29July Common Lisp manual, on page 72, adopts a solution to this from
the Lisp machine.  The RETURN function returns from the innermost enclosing
block, named or unnamed, except that it ignores blocks named T.  Thus macros
that need to generate blocks that are invisible to the user just name them
T, and return from them with (RETURN-FROM T value).

There are two problems with this.  One is that the named-PROG and named-DO
features have been flushed from Common Lisp; only BLOCK can have a name.
This means that it is impossible to create something that loops but is
invisible to RETURN.  The other problem is that sometimes it is necessary
to nest invisible blocks.  There is no way to nest two invisible blocks
and put a return from the outer one inside both of them, since they both
have to have the same name (T).  This problem shows up as two macros that
work separately, but mysteriously generate wrong code when used together.

The proposed solution:
Define (RETURN value) to be (RETURN-FROM NIL value); allow RETURN to return
from unnamed blocks only.  Require RETURN-FROM to be used to return from
named blocks.  Thus to make a block which is invisible to RETURN, you just
give it a name.  You can choose a unique name for a macro-generated block
the same way you would for a macro-generated variable, perhaps by using
GENSYM.
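
For example, the error-handling macro described earlier could be written
under this proposal with a gensym'd block name, so that a RETURN in the
user-supplied body is not captured; a sketch only, with an invented macro
name:

(defmacro catching-error-sketch (tag body error-body)
  (let ((blk (gensym)))
    `(block ,blk
       ;; Normal case: return BODY's value from the internal, invisible block.
       ;; If a throw to TAG occurs during BODY, the CATCH stops it and
       ;; ERROR-BODY runs, supplying the value of the whole form.
       (catch ,tag (return-from ,blk ,body))
       ,error-body)))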

This is incompatible with the Lisp machine, where RETURN returns from named
blocks as well as unnamed ones.  However, it isn't really incompatible since
all the Lisp machine ways to create a named block have been flushed from
Common Lisp, and the BLOCK special form is new.  In the Lisp machine, we can
keep around a compatibility kludge for a while, where named PROGs and DOs generate
two blocks, one named and one not named, unless the name is T, thus getting
the old behavior.

The other problem is that we need a way for macros to perform iteration without
"capturing" RETURN.  This is handled by introducing a new special form, which
is a "naked" PROG body without an associated block and without variable bindings.
I don't know of a good name for this, but the name doesn't matter much since
only macros that implement new control-structures should be using it.
The name could be GO-BODY, meaning a body with GOs and tags in it, or
PROG-BODY, meaning just the inside part of a PROG, or WITH-GO, meaning
something inside of which GO may be used.  I don't care; suggestions anyone?

(GO-BODY &body <body>)

<body> is treated like a prog body is now.  Symbols are labels and you can use
GO to branch.  GO-BODY always returns NIL (there are NO exceptions).

Now we can flush PROG as a special form and write it as a macro:

(defmacro prog (first &rest rest)
  (cond ((listp first)	;assuming we fix it so that (listp nil) => t
	 `(let ,first
	    (block nil
	      (go-body ,@rest))))
	((eq first t)
	 `(let ,(car rest)
	    (block t
	      (go-body ,@(cdr rest)))))
	(t
	 `(let ,(car rest)
	    (block ,first
	      (block nil
		(go-body ,@(cdr rest))))))))

∂03-Sep-82  2150	Skef Wholey <Wholey at CMU-20C> 	Proposed definition of SUBST, standard identity function 
Date: Saturday, 4 September 1982  00:50-EDT
From: Skef Wholey <Wholey at CMU-20C>
To:   STEELE at CMU-20C
Cc:   common-lisp at SU-AI
Subject: Proposed definition of SUBST, standard identity function

While implementing the sequence functions, I defined an Identity function for
the sole purpose of defaulting the :key parameters.  Since Identity is very
useful in general and for :key parameters in particular, and since it is an
extremely well-defined, extremely simple function, I think it would be worth
adding to the language.

--Skef
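
Presumably the definition in question is just the one-liner below, shown for
concreteness with the name IDENTITY suggested in the following message:

(defun identity (x) x)   ;returns its argument unchanged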

∂03-Sep-82  2202	Scott E. Fahlman <Fahlman at Cmu-20c> 	Proposed definition of SUBST   
Date: Saturday, 4 September 1982  01:02-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   STEELE at CMU-20C
Cc:   common-lisp at SU-AI
Subject: Proposed definition of SUBST


Maybe the right move is to eliminate the :TEST-NOT option for SUBST.
All you really want here is some sort of equality test, so :TEST-NOT
makes no real sense here.  Don't we have some precedents for this?

How about if we call the identity function IDENTITY ?

About REDUCE:  I'm the one (or one of the ones) who complained about it
before, when we had 600 or so sequence functions and didn't need another
dozen.  I now withdraw my former objection.  I still think it's slightly
confusing, but any user who can digest lexical scoping is ready for
this.

-- Scott

∂03-Sep-82  2224	Guy.Steele at CMU-10A 	SUBST  
Date:  4 September 1982 0124-EDT (Saturday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  SUBST

Well, perhaps SUBST should not have TEST-NOT, but other functions do.
I was just trying to illustrate how clumsy implementing TEST-NOT could be,
at least if you go about it the wrong way.  I am now less concerned.
(I admit to not having scanned Skef's code carefully -- sorry.)
--Guy

∂03-Sep-82  2307	Kent M. Pitman <KMP at MIT-MC> 	Writing PROG as a macro
Date: 4 September 1982 02:03-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Writing PROG as a macro
To: MOON at SCRC-TENEX
cc: COMMON-LISP at SU-AI

I agree whole-heartedly with your suggestion that (PROG T ...) be replaced with
the requirement that named RETURNs match only named PROGs and un-named 
RETURNs match un-named PROG. I have always felt the special-casing of 
T to be inelegant and I think this offers just the right degree of
control.

Also, since you brought up the idea of lower level primitives to implement
PROG with, I dredge up here thoughts I presented on the subject years back...

-----Begin Forwarded Message Portions-----
Date: 12 APR 1980 2054-EST
From: KMP at MIT-MC (Kent M. Pitman)
Subject: What should we DO?
To: (BUG LISP) at MIT-MC, NIL at MIT-MC, (BUG LISPM) at MIT-MC
CC: H at MIT-MC, KMP at MIT-MC, HENRY at MIT-MC, RMS at MIT-MC
CC: MOON at MIT-MC

... I now present my feelings on this issue of how DO/PROG could be done in
order to end this haggling, part of which I think comes out of the fact that these
return tags are tied up in PROG-ness and so on ... Suppose you had the
following primitives in Lisp:

(PROG-BODY ...) which evaluated all non-atomic stuff. Atoms were GO-tags.
 Returns () if you fall off the end. RETURN does not work from this form.

(PROG-RETURN-POINT form name) name is not evaluated. Form is evaluated and
 if a RETURN-FROM specifying name (or just a RETURN) were executed, control
 would pass to here. Returns the value of form if form returns normally or
 the value returned from it if a RETURN or RETURN-FROM is executed. [Note:
 this is not a [*]CATCH because it is lexical in nature and optimized out
 by the compiler. Also, a distinction between NAMED-PROG-RETURN-POINT
 and UNNAMED-PROG-RETURN-POINT might be desirable -- extrapolate for yourself
 how this would change things -- I'll just present the basic idea here.]

(ITERATE bindings test form1 form2 ...) like DO is now but doesn't allow
 return or goto. All forms are evaluated. GO does not work to get to any form
 in the iteration body.

So then we could just say that the definitions for PROG and DO might be
(ignore for now old-DO's -- they could, of course, be worked in if people
really wanted them but they have nothing to do with this argument) ...

 (PROG [ <tag> ] <bvl> . <body>)

  => (PROG-RETURN-POINT (LET <bvl> (PROG-BODY . <body>)) [ <tag> ])

 (DO [ <tag> ] <bind-specs> <tests> . <body>)

  => (PROG-RETURN-POINT (ITERATE <bind-specs> <tests> (PROG-BODY . <body>))
			[ <tag> ])

Other interesting combinations could be formed by those interested in them.
If these lower-level primitives were made available to the user, he needn't
feel tied to one of PROG/DO -- he can assemble an operator with the 
functionality he really wants....
-----
Date: 15 April 1980 00:40-EST
From: "Guy L. Steele, Jr." <GLS at MIT-MC>
Subject: What should we DO?
To: BUG-LISP at MIT-MC, NIL at MIT-MC, BUG-LISPM at MIT-MC
cc: KMP at MIT-MC, H at MIT-MC, HENRY at MIT-MC, RMS at MIT-MC,
    MOON at MIT-MC

... Seriously, folks, my own position on DO and friends is largely in agreement
with KMP here.  His PROG-RETURN-POINT is simply the lexical catch advocated
by DLW, with allowances for how RETURN could be expressed in terms of that.
It is of interest to note that the S-1 NIL compiler in fact
implements a construct called PROG-BODY with precisely those semantics;
PROG is then turned into a nested LET and PROG-BODY.  This was done
to concentrate all knowledge of variable bindings into one place --
the code that handles LAMBDA.  The original intent was just to use
this construct internally to the compiler, but indeed it may be a useful
building-block for other macros.
-----End Forwarded Message Portions-----

∂03-Sep-82  2332	Kent M. Pitman <KMP at MIT-MC> 	Proposed definition of SUBST
Date: 4 September 1982 02:28-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Proposed definition of SUBST
To: COMMON-LISP at SU-AI

    Date: Saturday, 4 September 1982  01:02-EDT
    From: Scott E. Fahlman <Fahlman at Cmu-20c>

    Maybe the right move is to eliminate the :TEST-NOT option for SUBST.
    All you really want here is some sort of equality test, so :TEST-NOT
    makes no real sense here.  Don't we have some precedents for this?...
-----
Why not just flush all :TEST-NOTs and make a primitive COMPLEMENT as:

(DEFUN COMPLEMENT (FN) #'(LAMBDA (&REST STUFF) (NOT (APPLY FN STUFF))))

a smart compiler could generate fairly good code for this and in some
cases literal translations from things like (COMPLEMENT #'EQ) to #'NEQ could
be done, etc. I suspect the constant argument case will occur very often
so this optimization will be a very productive one. Then people would just
write :TEST (COMPLEMENT #'EQ) if it mattered to them to have the opposite 
test.

T (Yale Scheme) has this. It's much more general than :TEST-NOT (has 
many more uses), simplifies the internal code of many system functions, 
and simplifies the language definition.

-kmp

∂04-Sep-82  0608	MOON at SCRC-TENEX 	Clarification of full funarging and spaghetti stacks   
Date: Saturday, 4 September 1982  04:21-EDT
From: MOON at SCRC-TENEX
to: common-lisp at SU-AI
Subject: Clarification of full funarging and spaghetti stacks

Okay, here's the big question.  Do closures over lexical variables have
dynamic extent or indefinite extent?  In other words, do we have upward
funargs, or only downward funargs with full access to lexical variables?

∂04-Sep-82  0659	TK at MIT-MC   
Date: Friday, 3 September 1982  20:12-EDT
Sender: TK at MIT-OZ
From: TK at MIT-MC
To:   common-lisp at sail

	Date: 3-Sep-82 09:08:30-PDT (Fri)
	From: ucbvax:<Kim:jkf> (John Foderaro)
	Subject: cases, re: kmp's and moon's mail

	Kent says that he has trouble remembering cases in words.  I can
	accept that, but Kent should not generalize and say that this is true of
	all people.
I would have no trouble remembering any CONSISTENT set of conventions
for labelling word boundaries.  Common lisp, for better or worse, has chosen
to label word boundaries in symbol names with a hyphen.  The function name
is load-byte, not loadbyte, not LoadByte, not Load←byte, not load!byte,
or anything else.  If there is ONE convention on how we separate words, then
talking about the separations is easy.  That's why you can discuss mixed
case without getting confused.  And why you can remember it.  If you used
a mixture of the above conventions, then you would find it impossible to
talk about mixed case, too.  We are really talking [partly] about what to use 
as the word delimiter, not about case at all.  Having one delimiter is much
better than having five.

	However, if I write a piece of code which is useful enough to be added to
	the public code, then that code will be translated to lower case
	before being added.

This is exactly the point.  If case is distinguished, then this process
will be painful, bug-prone, and extra work for everyone.  The only alternative
is to avoid making symbols which are unique without case.  If you do that, then
the case being folded on input is not much of a burden, since the source code
can be as pretty [or ugly, your taste] as you want.
-------

∂04-Sep-82  1946	Guy.Steele at CMU-10A 	Mailing list
Date:  4 September 1982 2246-EDT (Saturday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Mailing list

I have already requested the removal of research!dbm@berkeley from
the COMMON-LISP mailing list. 

∂04-Sep-82  2012	Guy.Steele at CMU-10A 	Re: Clarification of full funarging and spaghetti stacks 
Date:  4 September 1982 2312-EDT (Saturday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Re: Clarification of full funarging and spaghetti stacks
In-Reply-To:  MOON@SCRC-TENEX's message of 4 Sep 82 03:21-EST

Well, Moon certainly hit the nail on the head.  The Colander draft
of the Common LISP Manual mentions lexical function parameters as
examples of things that have indefinite extent, contrasting with
ALGOL, in which they also have lexical scope but instead have dynamic
extent.  August issue 49 has confirmed that variables shall be
lexical and not local, but the question of extent was not addressed
or confirmed, and that is my fault.

∂07-Sep-82  1341	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	lambda
Date: Tuesday, 7 September 1982, 16:37-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: lambda
To: Killian at MIT-MULTICS
Cc: Common-Lisp at SU-AI
In-reply-to: The message of 1 Sep 82 15:57-EDT from Earl A. Killian <Killian at MIT-MULTICS>

I remember that the issue of letting (lambda ...) evaluate had something
to do with the function/value dualism but I can't remember what.
Presumably GLS knows what is going on with this.

∂07-Sep-82  1350	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	case-sensitivity: an immodest proposal   
Date: Tuesday, 7 September 1982, 16:43-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: case-sensitivity: an immodest proposal
To: Common-Lisp at SU-AI

In reply to the mail from ucbvax:<Kim:jkf>:

Let me clarify what Moon said.  Masinter's proposal ALLOWS private code
in a CL dialect to be written in mixed case (and the case is
significant), but FORBIDS the existing ability to write portable code in
mixed case (and have the case be ignored).  Moon said:

                                It seems to me that all the modest
    proposal accomplishes (for portable programs) is not allowing them to
    be written in upper case, hardly an improvement.

So what we have is a tradeoff in which the proposal adds restrictions to
portable use of CL, in exchange for more flexible use of non-portable
CL.  This is not consonant with CL's primary goals.  For this reason
I would also like to go on record as being opposed to the proposal.

∂07-Sep-82  1500	Kim.fateman at Berkeley 	Another modest proposal  
Date: 7 Sep 1982 14:45:00-PDT
From: Kim.fateman at Berkeley
To: Common-Lisp@SU-AI
Subject: Another modest proposal

I tend to read these messages and then throw them out, but I think dlw
is wrong to say that Masinter's proposal forbids using full ascii in
portable code.  Just that the part that gets described in the yellow pages,
the interface, must be in lower case.  This can be accomplished by various
forms of synonymy (e.g. (putd 'foo (getd 'Foo)) to be crude).

The alternative which some people favor,
seems to be to throw away about half the useful characters,
and ignore the precedent from mathematics (which typically uses upper and
lower cases, plus Greek, Cyrillic, Hebrew, plus tildes, asterisks, subscripts,
superscripts).

In the spirit of this latter proposal, I suggest throwing away the characters
I,l,O,0,\,`,|,',/, because they obviously contribute to confusion;
I propose making the keys normally ascribed to 9 and 0 used
for ( and ) respectively, eliminating the need to shift.
Numbers can be written in octal, so 9 is unnecessary.  The absence of the 0
can be remedied by a set of sequence-shifting operations on strings of digits.

∂07-Sep-82  1513	Guy.Steele at CMU-10A 	Re: REDUCE function re-proposed 
Date:  7 September 1982 1754-EDT (Tuesday)
From: Guy.Steele at CMU-10A
To: Daniel L. Weinreb <dlw@SCRC-TENEX at MIT-MC>
Subject:  Re: REDUCE function re-proposed
CC: common-lisp at SU-AI
In-Reply-To:  Daniel L. Weinreb@SCRC-TENEX@MIT-MC's message of 7 Sep 82
             15:48-EST

Your objection is well-taken.  The use of the zero-argument
case does not always work; it is merely intended for convenience,
to allow omission of :INITIAL-VALUE in the common cases.
It allows some situations to work which otherwise would be
errors.  If the function can't accept zero arguments, all that
differs is that you get an error about wrong number of
arguments rather than about an empty sequence to REDUCE.

∂07-Sep-82  1521	Guy.Steele at CMU-10A 	forgot to CC this
Date:  7 September 1982 1814-EDT (Tuesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  forgot to CC this


- - - - Begin forwarded message - - - -
Date:  7 September 1982 1812-EDT (Tuesday)
From: Guy.Steele at CMU-10A
To: Kim.fateman at UCB-C70
Subject:  Re: Another modest proposal
In-Reply-To:  Kim.fateman@Berkeley's message of 7 Sep 82 16:45-EST

While I am not unsympathetic to the desires to exploit case,
I am slightly unpersuaded by the arguments from mathematics.
The conventions that arose around mathematics may perhaps have
arisen from the need for brevity (not in itself a bad thing)
due to the necessity of hand transcription, because one didn't
have computers to help one manipulate formulas.  If you had to
copy
a formula over thirty times as you simplified or otherwise dealt
with it, you would of course try to invent very concise symbols.
Also, mathematics has historically used adjacency to imply
multiplication in most contexts, ruling out multi-character
variable names.
APL is in a curious halfway position in this regard.  The user may have
multi-character names, but built-in functions, as a conventional
rule, may not!  Actually, they can, but all the characters must be
written in one character position.  They ran out of pretty overstrikes
some time ago.  Recently, some APL implementations have burst forth
and provided system functions with multi-character names
beginning with the quad character.
If you only have one character, then mentioning all its
attributes (case, boldness, hats and tildes, etc.) is not
so bad.  Suppose you use only five or six such attributes
in a very complex paper; then any name is at worst something
like "the vector capital A hat prime tilde".  But if a name
can be many characters long, having separate attributes for
each character explodes the number of possibilities, making
it ridiculous, as KMP has observed, to try to speak them,
unless there is enough self-control to avoid using the
attributes after all.
You would go nuts if you tried to distinguish bold, italic, and
ordinary tildes and hats.
--Guy
- - - - End forwarded message - - - -

∂07-Sep-82  1551	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	REDUCE function re-proposed    
Date: Tuesday, 7 September 1982, 16:48-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: REDUCE function re-proposed
To: Guy.Steele at CMU-10A, common-lisp at SU-AI
In-reply-to: The message of 3 Sep 82 17:56-EDT from Guy.Steele at CMU-10A

Sounds good.  Regarding DT's proposed variant, the problem is that it
only works with functions that can happily accept zero arguments; your
examples used +, -, and LIST, which do accept zero arguments as well as
two, but other useful functions might not accept zero.  Common Lisp is
not especially trying to make everything work with zero arguments in all
cases.  One might argue that all the functions that are reasonable
first arguments to REDUCE do, in fact, take any number of arguments,
because that's the kind of function they are, but I'd have to see such
an argument developed more fully before I'd be convinced.

∂07-Sep-82  1641	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	dlw's portability semantics    
Date: Tuesday, 7 September 1982, 16:39-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: dlw's portability semantics
To: Kim.fateman at UCB-C70, common-lisp at su-ai
In-reply-to: The message of 2 Sep 82 22:50-EDT from Kim.fateman at Berkeley

Of course, making it easier to move previously working non-CL code into
a CL implementation is a useful thing.  If you want to propose a way to
allow the running of programs with incompatible definitions of LOOP, or
DO, or CAR, then we could have some such facility.  It has nothing to do
with LOOP, though.

∂07-Sep-82  1641	Jim Large <LARGE at CMU-20C> 	case flames    
Date: Tuesday, 7 September 1982  19:40-EDT
From: Jim Large <LARGE at CMU-20C>
To:   Common-Lisp at SU-AI
Subject: case flames


   Here is an excerpt from a recent post on a local bboard at C-MU
which contained advice for puzzled Unix novices.

    
    As it turns out, the padding in the /etc/termcap definition of
    "concept" is optimized for 1200 baud...

    /etc/termcap contains a definition for "Concept" (note the
    capital C) that is adequate for 9600 baud.


   This kind of thing can not be stopped by widely proclaiming that 
"Its not good style",  "It won't be portable",  etc.  But it can be 
prevented by making it impossible to do.
							Jim Large

∂07-Sep-82  1648	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	DLW query about STRING-OUT and LINE-OUT  
Date: Tuesday, 7 September 1982, 16:52-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: DLW query about STRING-OUT and LINE-OUT
To: Guy.Steele at CMU-10A, common-lisp at SU-AI
In-reply-to: The message of 3 Sep 82 18:35-EDT from Guy.Steele at CMU-10A

    Date:  3 September 1982 1835-EDT (Friday)
    From: Guy.Steele at CMU-10A

    These functions were eliminated as a result of November issue 214.
    --Guy

Hmm.  My notes show that there was more sentiment to not flush them than
to flush them, especially on the part of you and me.  But if you say
that's what we decided in the meeting, then so be it.  I will probably
push for having them in the Lisp Machine anyway as extensions.

∂07-Sep-82  1648	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Clarification of full funarging and spaghetti stacks    
Date: Tuesday, 7 September 1982, 16:57-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: Clarification of full funarging and spaghetti stacks
To: MOON at SCRC-TENEX at MIT-MC, common-lisp at SU-AI
In-reply-to: The message of 4 Sep 82 04:21-EDT from MOON at SCRC-TENEX

I thought we had already decided this.  Amazing how communication can be
so imperfect even when one works so hard at it.  Anyway, I am in favor
of giving all closures indefinite extent, i.e. allowing upward funargs.
By the way, the paper in the Lisp Conference proceedings says that we
have already decided the issue in this direction; it says that Common
Lisp solves the entire "funarg problem" including upward funargs.

∂07-Sep-82  2023	Scott E. Fahlman <Fahlman at Cmu-20c> 	DLW query about STRING-OUT and LINE-OUT  
Date: Tuesday, 7 September 1982  23:22-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Cc:   common-lisp at SU-AI, Guy.Steele at CMU-10A
Subject: DLW query about STRING-OUT and LINE-OUT


I too would like to retain STRING-OUT and LINE-OUT.  What were the
arguments for flushing them?

-- Scott

∂07-Sep-82  2048	Richard E. Zippel <RZ at MIT-MC> 	Another modest proposal   
Date: 7 September 1982 23:44-EDT
From: Richard E. Zippel <RZ at MIT-MC>
Subject:  Another modest proposal
To: Common-Lisp at SU-AI

This is really getting out of hand.  As a former mathematician, I'd like to
point out that mathematicians tend to use very brief symbols for quantities
because they want to express a huge amount of information in very few
characters.  This is accomplished with the aid of an enormous amount of
context and tradition that is lacking in computer programs.  Ponder whether
you would prefer programmers to use ``ENTROPY'' instead of ``H'' (where it
could be confused with Planck's constant).  Perhaps we should flush the EXPT
function and rely on superscripts?

∂07-Sep-82  2126	Scott E. Fahlman <Fahlman at Cmu-20c> 	Vote on Cases   
Date: Wednesday, 8 September 1982  00:26-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Common-Lisp at SU-AI
Subject: Vote on Cases


I think that we are all sick to death of this case-sensitivity issue.
JKF, who brought the whole matter up, has asked us to make a decision,
and I guess it's time to do this.  I had hoped to have the poll of Franz
Lisp users as input before we got down to voting, but there seem to be
delays in getting this assembled.  In any event, I think we have all had
a chance to ask any nearby unix folks and/or mathematicians what they
think, we have heard all the arguments, and I think each of us has a
pretty clear opinion by now.

The live options seem to be:

1. Retain the case-insensitive status quo, along with a switch to print
symbols out in lower-case for users who like to see things that way.
More precisely, characters are by default converted to upper-case on
read-in, and then intern does observe case.

2. Add a switch to make the reader case-sensitive and, when in
case-insensitive mode, convert things to lower-case instead of
upper-case so that the case-sensitive folks don't have to type things in
in all-caps.

3. Make the system case-sensitive always, but require that all symbols
in white-pages or yellow-pages code be entirely in lower case.

**********************************************************************

Speaking for the Spice Lisp, Vax/VMS Common Lisp, and Vax/Unix Common
Lisp implementation efforts, I vote for option 1.  I should also say
that only an overwhelming show of support by the other implementors for
option 2 or 3 is likely to budge us from this position.  I honestly do
not believe that a significant number of Unix people will stay away from
Common Lisp because of this.

-- Scott

∂07-Sep-82  2236	Scott E. Fahlman <Fahlman at Cmu-20c> 	Array proposal (long msg) 
Date: Wednesday, 8 September 1982  01:35-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: Array proposal (long msg)


At the 8/21 meeting, we more or less decided on a general sort of
array/vector scheme and I volunteered to work out a detailed proposal.
The reason this has taken so long is that when I tried to work out the
details of what we had discussed (or what I THOUGHT we had discussed), I
ran into some inconsistencies.  The following proposal, then, is based
rather loosely on the 8/21 discussion.

I propose the following:

Arrays can be 1-D or multi-D.  All arrays can be created by MAKE-ARRAY
and can be accessed with AREF.  Storage is done via SETF of an AREF.
1-D arrays are special, in that they are also sequences, and can be
referenced by ELT.  Also, only 1-D arrays can have fill pointers.
Suppose we use the term VECTOR to refer to all 1-D arrays, since that is
what "vector" means in the vernacular.

Vectors can be specialized along several distinct axes.  The first is by
the type of the elements, as specified by the :TYPE keyword to
MAKE-ARRAY (actually, I would much prefer :ELEMENT-TYPE as the keyword
for this option, since :TYPE is confusing here).  A vector whose
element-type is STRING-CHAR is referred to as a STRING.  Strings, when
they print, use the ".." syntax; they also are the legal inputs to a
family of string-functions, as defined in the manual.  A vector whose
element-type is BIT (alias (MOD 2)) is a BIT-VECTOR.  These are special
because they form the set of legal inputs to the boolean bit-vector
functions.  (We might also want to print them in a strange way -- see
below.)

Some implementations may provide a special, highly efficient
representation for simple vectors.  (These are the things we were
tentatively calling "quick arrays" at the meeting -- I think "simple
vector" is a much better name.)  A simple vector is (of course) 1-D,
cannot have a fill pointer, cannot be displaced, and cannot be altered
in size after its creation.  To get a simple vector, you use the :SIMPLE
keyword to MAKE-ARRAY (or MAKE-STRING, etc.) with a non-null value.  If
there are any conflicting options specified, an error is signalled.  If
an implementation does not support simple vectors, this keyword/value is
ignored except that the error is still signalled on inconsistent cases.
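
To make the options concrete, creation calls under this proposal might look
like the following; the keyword names (including the suggested :ELEMENT-TYPE
spelling, :SIMPLE, and :PRINT) are the ones proposed here and are not
settled:

(make-array 100 :element-type 'string-char)      ;a string of length 100
(make-array 8 :element-type 'bit)                ;a bit-vector
(make-array 10 :element-type t :simple t)        ;a simple general vector
(make-array '(3 4) :element-type t :print t)     ;a 2-D array that prints its contents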

We need a new set of type specifiers for simple things: SIMPLE-VECTOR,
SIMPLE-STRING, and SIMPLE-BIT-VECTOR, with the corresponding
type-predicate functions.  Simple vectors are referenced by the usual
forms (AREF, CHAR, BIT), but the user may use THE or DECLARE to indicate
at compile-time that the argument is simple, with a corresponding
increase in efficiency.  Implementations that do not support simple
vectors ignore the "simple" part of these declarations.

Strings (simple or non-simple) would self-eval; all other arrays would
cause an error when passed to EVAL.  EQUAL would descend into strings,
but not into any other arrays.  EQUALP would descend into arrays of all
kinds, comparing the corresponding elements with EQUALP.  EQUALP would
be false if the array dimensions are not the same, but would not be
sensitive to the element-type of the array.
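
Some concrete cases, to make sure we agree on the intent (this uses the
#(...) and #"..." notations discussed below):

(equal  "foo" "foo")           ;true:  EQUAL descends into strings
(equal  #(1 2 3) #(1 2 3))     ;false: EQUAL does not descend into other arrays
(equalp #(1 2 3) #(1 2 3))     ;true:  EQUALP compares corresponding elements
(equalp #(1 2) #(1 2 3))       ;false: the dimensions differ
(equalp #(1 0 1) #"101")       ;true:  EQUALP is not sensitive to element-type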

Completely independent of the above classifications is the question of
whether or not an array is normally printed.  If the :PRINT keyword to
MAKE-ARRAY has a non-null value, the array will try to print its
contents, subject to PRINLEVEL and PRINLENGTH-type constraints;
otherwise, the array would print as a non-readable object: #<array-...>.
I would suggest that if :PRINT is not specified, all vectors should
default to printing and all other arrays should default to non-printing.

Now the only problem is how to print these arrays.  If we want this
printing to preserve all features of the array (do we?) I think that the
only reasonable solution is to make the common cases print in a
nice-looking format and use #.(make-array...) for the rest.  Simple
strings could print in the double-quote syntax, simple-bit-vectors in
the #"..." format, simple vectors of element-type T could print as
#(...).  For arrays of element-type T, we could resurrect the #nA(...)
format, where n is the number of dimensions and the list contains the
elements, nested down n levels.  (I would not allow arbitrary sequences
here -- useless and confusing.)  The vector and array representations,
but not the string or bit-vector representations, would observe PRINLEVEL
and PRINLENGTH.  Everything else would have to use #.(make-array ...),
unless we want to make up some really horrible new notation.
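
To spell out the common cases (nothing here is settled; the notations are the
ones just described):

"a simple string"              ;double-quote syntax
#"10110"                       ;a simple bit-vector
#(a b 3.0)                     ;a simple vector of element-type T
#2A((1 2 3) (4 5 6))           ;a 2-dimensional array of element-type T,
                               ;elements nested down two levels
#.(make-array ...)             ;everything else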

Alternatively, we could print everything in the nice form, but lose the
information on whether the original was simple and whether its
element-type is T or something more restrictive.  All strings, simple or
not, would print as "...", all bit vectors as #"...", all other vectors
as #(...), and all other arrays as #nA(...).  I would prefer this, but
it might turn out to be a big screw for some applications if these
notations did not preserve all of the state of the original object.

Opinions?

∂07-Sep-82  2341	UCB-KIM:jkf (John Foderaro) 	results of a case poll    
Date: 7-Sep-82 23:06:33-PDT (Tue)
From: UCB-KIM:jkf (John Foderaro)
Subject: results of a case poll
Message-Id: <8208080606.26883@UCB-KIM.BERKELEY.ARPA>
Received: by UCB-KIM.BERKELEY.ARPA (3.193 [9/6/82]) id a26883;
	7-Sep-82 23:06:37-PDT (Tue)
Received: from UCB-KIM.BERKELEY.ARPA by UCB-VAX.BERKELEY.ARPA (3.193 [9/6/82]) id a25688;
	7-Sep-82 23:32:17-PDT (Tue)
To: common-lisp@su-ai

 Here are the results of a poll I took of Franz Lisp users about their use of
case sensitivity:

		 Results of the case sensitivity poll.
As the letters came in, I assigned each of them a number.  Below I've listed
the results by number so that you can see how the answers correlated.
Some people only answered a few of the questions (some none at all).
The text of all the letters is at mit-mc in the file "ucb;cases text"
(and for Berkeley people it is in kim:~jkf/surv)

 1) Do you use the fact that Franz Lisp is case sensitive, that is do you
    use variables with capital letters?

    yes: 3,4,5,6,7,11,14,16,17,18,19,21,23,24,25,26,27,28,30,31,33,35,36,38
         40,41,42,43,45,47,48

    no: 8,9,10,12,13,34,37,46

    summary: yes: 31/39 = 79%  no: 8/39 = 21%
    
    If yes, do you ever have two different variables whose names differ only
    by capitalization?

     yes: 4,6,11,16,17,18,21,23,24,27,28,42,43,45,46,48
     no: 7,14,19,25,26,30,31,33,35,36,38,40

    summary: yes: 16/28 = 57%   no: 12/28 = 43%
    
    [When I refer to 'variable' I mean a symbol to which you assign a
    value (to distinguish it from something used just for
    printing, as in (print 'Results)). ]
    
 2) If a case-insensitive Common Lisp was the only lisp available on your
    machine would you:

    a) use it without complaint about the case-insensitivity

    b) ask the person in charge of Common Lisp at your site to add a switch
       to disable the code that maps all characters to the same case, thus
       making it possible for each user to make Common Lisp case-sensitive.
 
    a: 3,8,9,11,12,13,16,25,26,30,34,46
    b: 1,4,6,7,10,14,18,19,21,24,27,31,33,35,36,37,38,40,41,42,43,45,47,48

    summary: a: 12/36 = 33%     b: 24/36 = 67%
    
 3) Do you prefer an operating system to be case-sensitive (like Unix and
    Multics) or case-insensitive (like Tops-10, Tops-20, Tenex, etc etc).
 
    sensitive: 1,4,5,6,7,10,11,14,18,19,21,24,26,30,31,33,35,36,40,
	       42,43,45,47,48
    insensitive:  3,8,9,12,25,34,46

    summary: sensitive: 24/31 = 77%    insensitive: 7/31 = 23%


∂08-Sep-82  1018	RPG   via S1-GATEWAY 	Case vote    
To:   common-lisp at SU-AI  
As another 1/3 of the S-1 Lisp implementors (and the head of the project),
I vote for option 1.
			-rpg-

∂08-Sep-82  1012	Jonathan Rees <Rees at YALE> 	Vote on Cases  
Date: Wednesday, 8 September 1982  13:06-EDT
From: Jonathan Rees <Rees at YALE>
To: Fahlman at CMU-20C
Cc: Common-Lisp at SU-AI
Subject: Vote on Cases

I don't know whether I rate voting status or not, but in case I do:

Speaking for Yale's T implementation project (T is a portable
Scheme-like Lisp dialect) and for Yale's Lisp users (which includes
Maclisp, UCI Lisp, Franz Lisp, and T users), I strongly urge Common Lisp
to retain its current case-insensitive status quo, that is, option 1. of
Fahlman's recent message.  Since the reasons for this position have been
discussed at length I will make no mention of them even though I (we)
feel strongly.

T might become the base for yet another Common Lisp implementation
sometime next year (we're still mulling this one over), but if it does,
Common Lisp's decision on case won't matter much, since Common Lisp will
be implemented as an incompatible "compatibility mode" in any case
[sic].  However, compatibility with T on this issue will make life a
whole lot easier for us should we decide to go ahead with the project.

∂08-Sep-82  1228	FEINBERG at CMU-20C 	Vote on Cases 
Date: 8 September 1982  15:27-EDT (Wednesday)
From: FEINBERG at CMU-20C
To:   Scott E. Fahlman <Fahlman at CMU-20C>
Cc:   Common-Lisp at SU-AI
Subject: Vote on Cases

Howdy!
	Speaking as a Lisp user, I vote for option 1, status quo.

∂08-Sep-82  1552	MOON at SCRC-TENEX 	Array proposal 
Date: Wednesday, 8 September 1982  18:25-EDT
From: MOON at SCRC-TENEX
To: Scott E. Fahlman <Fahlman at Cmu-20c>
Cc: common-lisp at SU-AI
Subject: Array proposal

This is good.

    the type of the elements, as specified by the :TYPE keyword to
    MAKE-ARRAY (actually, I would much prefer :ELEMENT-TYPE as the keyword
    for this option, since :TYPE is confusing here).
I am strongly in favor of this.  The current :TYPE keyword to MAKE-ARRAY
means something entirely different from element-type, but I had given up
hope of getting it back after it was "stolen".  :ELEMENT-TYPE is much
clearer.

The CHAR and BIT functions can go away since they are just duplications
of AREF.  Programs for some implementations might want to define macros
that generate AREF with a THE declaration.
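
For instance (STRING-REF and BIT-REF are made-up names; the types are the
ones from Fahlman's proposal):

(defmacro string-ref (s i)
  `(aref (the simple-string ,s) ,i))

(defmacro bit-ref (b i)
  `(aref (the simple-bit-vector ,b) ,i))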

Making all vectors (1-D arrays) default to printing is wrong.  What's
so special about 1-dimensionality.  Arrays created by typing in the #(...)
syntax would have their printing-bit set, of course.

How do sequence-returning functions decide what to use for the printing-bit
of their result?

There is a fairly serious conflict between wanting strings with fill-pointers
to print as ordinary strings, and wanting them to print in a way that reads
as a string with a fill-pointer.  I don't have a suggestion about this,
especially since I am not a strong believer in printing things out and reading
them back in anyway.

∂08-Sep-82  2334	Kent M. Pitman <KMP at MIT-MC> 	PRINT/READ inversion   
Date: 9 September 1982 02:31-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  PRINT/READ inversion
To: COMMON-LISP at SU-AI

Besides printing/reading code (and how many people use #.(MAKE-ARRAY ...)
to make random constants in their code? hopefully not too many), what other
applications were there for printing/reading strings? Many things (the
LispM patch file directories come to mind as a simple example) save strings
as a way of saving objects whose only property is printed representation.
Such things are safe to print with "...". I bet people don't do much saving
of strings that have hairy parts ... eg, who would ever want to save out 
ZWEI line objects to a file and read them back in? I'm curious if anyone has
ever had need in some real program for writing out strings which had hairy 
attributes and reading them back in... Without real cases to ponder over,
it's hard to be sure I'm thinking about the right issues.

Also, it occurred to me that the syntax "..." might be a good printed
syntax for `simple' strings and that #"..." might be a good syntax for 
strings whose printed representation didn't show the whole story and therefore
should be read errors on input (eg, like ZWEI's line objects). This would
leave people with the problem of coming up with another syntax for bit
strings so maybe it's a bad idea.

-kmp

ps for those not familiar with ZWEI, the LispM's editor: it stores editor
   buffers essentially as doubly-linked chains of strings.  The array leader
   of each line in the buffer contains a slot for a pointer to the line 
   object which is the previous line and another for the following line,
   so that by doing clever references to array leaders you can essentially
   cdr your way forward or backward through the buffer. The problems involved
   in writing a printer -- even one using #.(...) -- which could print out
these objects in a truly READ-invertible form would be tremendous because
   of the odd kinds of circularities present; the structure is obviously 
   quite circular.

∂09-Sep-82  0014	Kent M. Pitman <KMP at MIT-MC> 	Array proposal    
Date: 9 September 1982 03:11-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Array proposal
To: COMMON-LISP at SU-AI

    Date: Wednesday, 8 September 1982  18:25-EDT
    From: MOON at SCRC-TENEX

    ... The CHAR and BIT functions can go away since they are just duplications
    of AREF.  Programs for some implementations might want to define macros
    that generate AREF with a THE declaration...

I disagree with this. While it may be the case that you will want to make
CHAR and BIT trivially turn into just AREFs, I think they have value in terms
of self-documentation. Particularly, I would rather see:
	(DEFUN FIRSTCHAR (STRING) (CHAR STRING 0))
than
	(DEFUN FIRSTCHAR (STRING) (AREF STRING 0))
even if the two were identically efficient. Also, automatic translators to 
dialects or languages not part of Common Lisp will have a considerably easier
time if people write programs that make a visual distinction between characters
and arrays even where there is not one, so that useful optimizations may be done
where appropriate.  I imagine this would help out the T group considerably when
it comes time to write a Common Lisp compatibility package.

Further, I don't even know what you mean by "Programs for some 
implementations...". I thought the whole idea behind Common Lisp was that
code should port well from implementation to implementation. If people on
the LispM write code using AREF because they know it'll be fast there, then
they're throwing away useful information that would allow their code to run
faster in some other implementation. If you don't expect people to write code
for the LispM which is to be ported to machines that'll need declarations, 
then I think you're drifting from the goals of a Common Lisp.

    Making all vectors (1-D arrays) default to printing is wrong.  What's
    so special about 1-dimensionality.  Arrays created by typing in the #(...)
    syntax would have their printing-bit set, of course....

I thought the idea was that vectors should be simple and effectively 
"option-free". They should not waste a lot of space storing information like
how they print. They're mostly a hack to allow implementors to write lots of
fancy optimizations. If you start hairing them up with things like print 
options, pretty soon you'll be back up to the level of arrays. I support the
idea that they should all follow some set of fixed print conventions.

∂09-Sep-82  0232	Jeffrey P. Golden <JPG at MIT-MC> 	Vote on Cases  
Date: 9 September 1982 05:29-EDT
From: Jeffrey P. Golden <JPG at MIT-MC>
Subject: Vote on Cases
To: Common-Lisp at SU-AI

I vote for option 1.  (I am just a user and peruser of the Common Lisp 
mail.)

∂09-Sep-82  1142	Scott E. Fahlman <Fahlman at Cmu-20c> 	Printing Arrays 
Date: Thursday, 9 September 1982  14:42-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: Printing Arrays


At the 8/21 meeting someone (Weinreb, maybe?) was arguing that we ought
to have "Mulit-D vectors", by which he meant arrays that would print out
in a simple readable format.  The :PRINT proposal is an attempt to deal
with this issue, but since I don't understand quite what uses were
envisioned for these things, I can't decide whether it is OK for these
things just to print in a format that displays their elements and
dimensions, but not details like whether the element-type is restricted.
If the desire is just to have an array that people can examine, and that
reads back in to something EQUALP to the original, that is easy to do;
if the applications require that the printed object reads back in and
turns into the exact same type-restricted form that we started with,
things get ugly.  I think the idea was just to have a class of arrays
that were easy to look at, and I will proceed on that assumption in
revising the array proposal -- if I'm wrong about this, somebody had
better speak up pretty soon.

-- Scott

∂09-Sep-82  1611	Martin.Griss <Griss at UTAH-20> 	Case   
Date:  9 Sep 1982 1709-MDT
From: Martin.Griss <Griss at UTAH-20>
Subject: Case
To: common-lisp at SU-AI
cc: griss at UTAH-20

I vote for Case-insensitive, as in PSL. We coerce to upper (unless a
switch is flipped).
-------

∂10-Sep-82  2233	Robert W. Kerns <RWK at SCRC-TENEX at MIT-MC> 	Re: SETF and friends [and the "right" name problem]  
Date: Saturday, 11 September 1982, 01:22-EDT
From: Robert W. Kerns <RWK at SCRC-TENEX at MIT-MC>
Subject: Re: SETF and friends [and the "right" name problem]
To: JonL at PARC-MAXC
Cc: common-lisp at SU-AI
In-reply-to: The message of 2 Sep 82 13:33-EDT from JonL at PARC-MAXC

    Date: 2 Sep 1982 10:33 PDT
    From: JonL at PARC-MAXC
    Apologies for replying so late to this one -- have been travelling for a week
    after AAAI, and *moving to a new house* -- but I want to add support to
    your comments.
Me too.  Moving, that is.  I just got to your message, and only
because it had me as a recipient directly instead of on a mailing-list.
I now live in Brighton.  (What a pain!  About two weeks shot to hell,
between looking and moving...and still no phone because of the *&↑@#↑%
phone company screwing up my order, as usual.)

So how's the house?

∂11-Sep-82  0420	DLW at MIT-MC 	Vote 
Date: Saturday, 11 September 1982  07:21-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   common-lisp at su-ai
Subject:Vote

Speaking for the Symbolics Common Lisp effort, and on behalf of Dave
Moon and Howard Cannon, I vote for option 1.  We feel rather strongly
about this and, like SEF, will only budge if there is very strong
opposition to this vote.
-------

∂11-Sep-82  0435	DLW at MIT-MC 	Array proposal 
Date: Saturday, 11 September 1982  07:33-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   Kent M. Pitman <KMP at MIT-MC>
Cc:   COMMON-LISP at SU-AI
Subject: Array proposal

	Making all vectors (1-D arrays) default to printing is wrong.  What's
	so special about 1-dimensionality.  Arrays created by typing in the #(...)
	syntax would have their printing-bit set, of course....

    I thought the idea was that vectors should be simple and effectively 
    "option-free".
No, that was last week's jargon.  In the new jargon, "vector" means
a 1-D array, whereas the simple thing you are talking about is
now called a SIMPLE array.  So, what you are really saying is that
the print-bit should be another one of those things that SIMPLE arrays
cannot hack.  SEF's proposal, on the other hand, pretty clearly
states that :PRINT is orthogonal to "all of the above" attributes,
but I don't know whether he really intended to say that or not.

Being of the Lisp Machine persuasion, I don't care a lot about
exactly which restrictions should be imposed on SIMPLE arrays
and which should not; I'm not qualified to have an opinion.
People who care about this should discuss it.
-------

∂11-Sep-82  0446	DLW at MIT-MC 	Array proposal (long msg)
Date: Saturday, 11 September 1982  07:44-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   Scott E. Fahlman <Fahlman at Cmu-20c>
Cc:   common-lisp at SU-AI
Subject: Array proposal (long msg)

Moon asks how sequence-returning functions decide whether
to turn on the print bit.  Actually, how to they decide
whether to put in a leader and other random attributes like
that?  The same problem came up with DEFSTRUCT long ago,
and the random :MAKE-ARRAY option was put in to fix it,
but I'd hate to see a :MAKE-ARRAY parameter added to every
sequence function unless it is necessary.
-------

∂11-Sep-82  0446	DLW at MIT-MC 	Printing Arrays
Date: Saturday, 11 September 1982  07:42-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   Scott E. Fahlman <Fahlman at Cmu-20c>
Cc:   common-lisp at SU-AI
Subject: Printing Arrays

I like your proposal a lot.  You seem to have cleaned up a lot of the
confusion that we left in the air after the meeting.  :ELEMENT-TYPE is
definitely the right thing, too.

My original reason for wanting printable arrays (I was only calling
them multi-D vectors to be humorous, of course) was to address
the general complaints I have often heard that arrays are second-class
citizens in Lisp because you can't play with them as easily as
you can play with lists, since they don't print.  The idea was to
allow APL-like interaction with Lisp, in accordance with GLS's
general principle that Lisp try to adopt the good ideas and
the functionality of APL.  This is a pretty vague goal, and as
such does not really help to resolve the issue.

However, we should keep in mind that the general principle that
any Lisp object should be printable in a readable way is violated
in many cases throughout the language; we don't really hold this
to be a general principle.  It is important that objects used
to represent PROGRAMS read in correctly, but anything else is
just icing on the cake.  So I don't think the readability
of printed arrays is really a big semantic issue.

In fact, since the main thing you're worried about is whether
the printed representation has to reflect the element-type,
I should point out that the element-type is only an efficiency
issue (except for strings etc, but they already print differently)
and so it is not semantically necessary (mostly) to worry about
their preservation; it's mainly an efficiency issue.  And if
you are worried about efficiency, maybe then it is reasonable
to say that you should use some better representation for
your arrays than text that needs parsing.  (This is a somewhat
bogus argument since efficiency in saving and loading is not
the same as efficiency in computation, but I think the spirit
is right.)
-------

∂11-Sep-82  1355	STEELE at CMU-20C 	Proposal for ENDP    
Date: 11 Sep 1982 1648-EDT
From: STEELE at CMU-20C
Subject: Proposal for ENDP
To: common-lisp at SU-AI

Recall that ENDP is the newly-recommended predicate for testing for
the end of a list.  I propose the small change that ENDP take an optional
second argument, which is the list whose end you are checking for.
All this does is allow better error reporting:

(defun endp (thing &optional (list () listp))
  (cond ((consp thing) nil)
	((null thing) t)
	(listp (cerror :improperly-terminated-list
		       "The non-null atom ~S terminated the list ~S"
		       thing list)
	       t)
	(t (cerror :improperly-terminated-list
		   "The non-null atom ~S terminated a list"
		   thing))))
-------

∂11-Sep-82  1500	Glenn S. Burke <GSB at MIT-ML> 	Vote    
Date: 11 September 1982 18:01-EDT
From: Glenn S. Burke <GSB at MIT-ML>
Subject: Vote
To: common-lisp at SU-AI

I go for option 1.

As an aside (and not to be construed as an argument for this on my part)
I note that at least one place in the manual describes canonicalization
of something (other than READ, I forget what it was) as being done by
STRING-UPCASE.  Maybe it was the names after #\.  I'd have to go searching
to see.

∂11-Sep-82  1537	Kent M. Pitman <KMP at MIT-MC> 	ENDP    
Date: 11 September 1982 18:28-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject: ENDP
To: Steele at CMU-20C
cc: COMMON-LISP at SU-AI

Couldn't you just have written this?
 
(defun endp (thing &optional list)
  (cond ((consp thing) nil)
	((null thing) t)
	(t (cerror ':improperly-terminated-list
		   "The non-null atom ~S terminated a list~@[, ~S]."
		   thing list))))

In any case, I definitely do not like to see functions haired up with
all kinds of funny args that do idiosyncratic things.  There are zillions
of functions which have a potential for erring and if they all take args
of fun things to make the error message more readable, the language 
definition will be considerably more cluttered. I would want to understand
some theory of when it was appropriate to add such args to things and when
it wasn't before I thought it was a good idea to put this one in.


∂11-Sep-82  2155	Guy.Steele at CMU-10A 	KMP's remarks about ENDP   
Date: 11 September 1982 2326-EDT (Saturday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  KMP's remarks about ENDP

KMP's remarks are well taken, and his version is certainly more concise.
I do not in fact have a good feel for when to do this in general, but
in trying to write EVAL I found myself using ENDP a lot, and felt that
it would be a lot easier to locate the bug if some context were provided;
providing this context happened always to be easy to do.  But I would
not be unhappy to omit this proposed "feature".
--Guy

∂12-Sep-82  0054	Guy.Steele at CMU-10A 	???    
Date: 11 September 1982 2317-EDT (Saturday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  ???


- - - - Begin forwarded message - - - -
Mail-From: ARPANET host CMU-20C received by CMU-10A at 11-Sep-82 23:02:40-EDT
Mail-from: ARPANET site SU-SCORE rcvd at 11-Sep-82 1558-EDT
Date: 11 Sep 1982 1222-PDT
From: Andy Freeman <CSD.FREEMAN at SU-SCORE>
Subject: let & let*
To: steele at CMU-20C

I've been looking at the let/let* semantics.  The way that I understand it,
let* is essentially a "recursive" let.  They could be defined by (although
this ignores the extended syntax for the elements of the argument list of
lets, does no error checking, and doesn't handle declarations)

(defmacro let (args &body body)
  `((lambda ,(mapcar (function (lambda (arg)
				 (cond ((atom arg) arg)
				       (t (car arg)))))
		     args)
      ,@ body)
    ,@(mapcar (function (lambda (arg)
			 (cond ((atom arg) nil)
			       (t (cadr arg)))))
	     args)))

(defmacro let* (args &body body)
  (cond (args `(let (,(car args))
		 ,@ (cond ((cdr args)
			   `((let* ,(cdr args) ,@ body)))
			  (t body))))
	(t `(progn ,@ body)))).

The problem with this is that it is too sequential.  The reason for using
let* is that you want to put all of the variables in the same place, but to
write a form where there are both parallel and sequential bindings, you have
to revert to a nested form.

The only way that I can think of to handle this is to extend the semantics
of let by using markers, either separator tokens in the binding object list
or by a third element in a binding object that should be bound after the
previous element.  The separator token definitions of let/let* are on the
next page.
!
All bindings after a token (&sequential or &parallel) are nested.
The difference between the two is that each binding after &sequential
is nested while &parallel does them in parallel.  (Another possibility
is to make &parallel a nop if the bindings are already being done in
parallel so that &parallel only makes sense after a &sequential.)

I like the token syntax better.  It also can be used for do, prog,
and ALL lambda lists.  (In the latter, it only makes sense for the
&aux and &optional args, and then only for the default values.)

(defmacro let (args &body body)
  (cond ((null args) `(progn ,@ body))
	((eq (car args) '&sequential)
	 (cond ((memq (cadr args) '(&sequential &parallel))
		(comment ignore &sequential if followed by keyword)
		`(let ,(cdr args)
		   ,@ body))
	       (t (comment do one binding then nest)
		  (setq args (cond ((atom (cadr args))
				    (cons (list (cadr args) nil) (cddr args)))
				   (t (cdr args))))
		  `((lambda (,(caar args))
		      ,@ (cond ((cdr args)
				`((let ,(cons '&sequential (cdr args))
				    ,@ body)))
			       (t body)))
		    ,(cadar args)))))
	((eq (car args) '&parallel)
	 (comment &parallel just gets ignored)
	 `(let , (cdr args)
	    ,@ body))
	(t (do ((arg-list (mapcar (function (lambda (arg)
				    (cond ((memq arg '(&sequential &parallel))
					   arg)
					  ((atom arg) (list arg nil))
					  (t arg))))
				  args)
			  (cdr arg-list))
		(syms nil (cons (caar arg-list) syms))
		(vals nil (cons (cadar arg-list) vals)))
	       ((or (null arg-list)
		    (memq (car arg-list) '(&sequential &parallel)))
		`((lambda ,(nreverse syms)
		    ,@ (cond (arg-list `((let ,arg-list ,@ body)))
			     (t body)))
		  ,@ (nreverse vals)))))))

(defmacro let* (args &body body)
  `(let , (cons '&sequential args)
     ,@ body)).

-andy
-------
- - - - End forwarded message - - - -

∂12-Sep-82  0541	DLW at MIT-MC 	???  
Date: Sunday, 12 September 1982  08:40-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   Guy.Steele at CMU-10A
Cc:   common-lisp at SU-AI
Subject: ???

I'd like to share a fact that I discovered when working on my
compiler: LET* is not really semantically equivalent to nested LETs,
even though you might think it is (and even though it used to be
implemented as such a macro on the Lisp Machine!).  The reason
is that non-pervasive SPECIAL declarations would have their meanings
altered by such a transformation.  All implementors should understand
this problem to avoid what might otherwise be a tempting but
incorrect implementation.
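
An example of the kind of case I mean (F here stands for some function that
refers to X as a special variable; exactly how the declaration scopes is the
point at issue):

(let* ((x 1)
       (y (f)))
  (declare (special x))    ;makes this binding of X special, so F sees X = 1
  (g x y))

;; The "obvious" expansion:
(let ((x 1))               ;now an ordinary lexical binding -- F does not see it,
  (let ((y (f)))           ;and the declaration below affects only references
    (declare (special x))  ;to X inside the inner LET
    (g x y)))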
-------

∂12-Sep-82  1252	Scott E. Fahlman <Fahlman at Cmu-20c> 	ENDP and LET*   
Date: Sunday, 12 September 1982  15:50-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: ENDP and LET*


I am mildly opposed to Guy's ENDP suggestion on the grounds that it is
one more damned little thing to worry about, and for some users this
could be the hair that breaks the camel's brain, or whatever.  Having a
dotted list choke EVAL is a very low probability error, unless the user
is doing something where he deserves to lose, so I'd rather not make
ENDP harder to use just to deal with this rare case.

I am very strongly opposed to the proposal to add &parallel and
&sequential to variable-binding lists.  In the rare case where the user
wants ultimate control over the order in which inits are done, let him
do it with SETQs and PSETQs or nested LET and LET*.

-- Scott

∂12-Sep-82  1333	MOON at SCRC-TENEX 	ENDP optional 2nd arg    
Date: Sunday, 12 September 1982  16:14-EDT
From: MOON at SCRC-TENEX
To: common-lisp at sail
Subject: ENDP optional 2nd arg

In our debugger, where the arguments to functions are always available,
and the arguments to the function that err'ed are displayed as part of
the initial error message, the extra argument to ENDP would be superfluous.
I think this is a better approach since it handles the problem generally
rather than handling one specific case that someone happened to think of
first.

∂12-Sep-82  1435	Scott E. Fahlman <Fahlman at Cmu-20c> 	Case  
Date: Sunday, 12 September 1982  17:33-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: Case


With the exception of the Berkeley folks and Masinter, the vote has so
far been unanimous for the case-insensitive status quo.  In particular,
the implementors of the following systems are on record for this option,
most with strong opinions: Symbolics, Spice Lisp, Vax Common Lisp, S1
Lisp, Dec-20 Common Lisp, T Lisp, and PSL.  As far as I am concerned,
the issue is now closed.

∂12-Sep-82  1532	UCBKIM.jkf@Berkeley 	Re: Case 
Date: 12-Sep-82 15:21:24-PDT (Sun)
From: UCBKIM.jkf@Berkeley
Subject: Re: Case
Message-Id: <8208122221.478@UCBKIM.BERKELEY.ARPA>
Received: by UCBKIM.BERKELEY.ARPA (3.193 [9/6/82]) id a00478;
	12-Sep-82 15:21:26-PDT (Sun)
Received: from UCBKIM.BERKELEY.ARPA by UCBVAX.BERKELEY.ARPA (3.197 [9/11/82]) id A19168;
	12-Sep-82 15:22:35-PDT (Sun)
To: Fahlman@Cmu-20c
Cc: common-lisp@su-ai
In-Reply-To: Your message of Sunday, 12 September 1982  17:33-EDT

  Since I brought this whole thing up, perhaps you will permit me the last
word.  I think that the outcome of the vote among the implementors is clear:
they like the environment they work in and they feel that everyone should
'enjoy' it.   I anticipated the result of this vote in my poll, and the
feelings of the Unix 'users' are clear:
-------
 2) If a case-insensitive Common Lisp was the only lisp available on your
    machine would you:

    a) use it without complaint about the case-insensitivity

    b) ask the person in charge of Common Lisp at your site to add a switch
       to disable the code that maps all characters to the same case, thus
       making it possible for each user to make Common Lisp case-sensitive.
 
    a: 3,8,9,11,12,13,16,25,26,30,34,46
    b: 1,4,6,7,10,14,18,19,21,24,27,31,33,35,36,37,38,40,41,42,43,45,47,48

    summary: a: 12/36 = 33%     b: 24/36 = 67%
-------

  Should Vax common lisp ever reach the Unix world, it is clear that people
will immediately add case-sensitivity as an option.  There will then be
programs that work in Unix Common Lisp (perhaps called Truly Common Lisp),
but not in Common Lisp simply because of the case problems.  Maybe then
would be a good time to bring this issue up again.



∂12-Sep-82  1623	RPG  	Vectors versus Arrays   
To:   common-lisp at SU-AI  

Watching the progress of the Common Lisp committee on the issue
of vectors over the past year I have come to the conclusion that
things are on the verge of being out of control.  There isn't any
outstanding issue with regard to vectors versus arrays that
disturbs me as much as the trend of things - almost to the extent
that I would consider removing S-1 Lisp from Common Lisp.

When we first started out there were vectors and arrays; strings and bit
vectors were vectors, and we had the situation where a useful data
structure - derivable from others, though it is - had a distinct name and
a set of facts about it that a novice user could understand without too
much trouble. At last November's meeting the Symbolics crowd convinced us
that changing things were too hard for them, so strings became
1-dimensional arrays. Now, after the most recent meeting, vectors have
been canned and we are left with `quick arrays' or `simple arrays' or
something (I guess they are 1-dimensional arrays, are named `simple
arrays', and are called `vectors'?).

Of course it is trivial to understand that `vectors' are a specialization
of n-dimensional arrays, but the other day McCarthy said something that
made me wonder about the idea of generalizing too far along these lines.
He said that mathematicians proceed by inventing a perfectly simple,
understandable object and then writing it up. Invariably someone comes
along a year later and says `you weren't thinking straight; your idea is
just a special case of x.' Things go on like this until we have things
like category theory that no one can really understand, but which have the
effect of being the most general generalization of everything.

There are two questions: one regarding where the generalization about vectors
and arrays should be, and one regarding how things have gone politically.

Perhaps in terms of pure programming language theory there is nothing
wrong with making vectors a special case of arrays, even to the extent of
making vector operations macros on array operations. However, imagine
explaining to a beginner, or a clear thinker, or your grandchildren, that
to get a `vector' you really make a `simple array' with all sorts of
bizarre options that simply inform the system that you want a streamlined
data structure. Imagine what you say when they ask you why you didn't just
include vectors to begin with.

Well, you can then go on to explain the joys of generalizations, how
n-dimensional arrays are `the right thing,' and then imagine how you
answer the question:  `why, then, is the minimum maximum for n, 63?' I
guess that's 9 times easier to answer than if the minimum maximum were 7.

Clearly one can make this generalization and people can live with it. 
We could make the generalization that LIST can take some other options,
perhaps stating that we want a CDR-coded list, and it can define some
accessor functions, and some auxiliary storage, and make arrays a 
specialization of CONS cells, but that would be silly (wouldn't it??).

The point is that vectors are a useful enough concept to not need to suffer
being a specialization of something else.

The political point I will not make, but will leave to your imagination.

			-rpg-

∂12-Sep-82  1828	MOON at SCRC-TENEX 	Vectors versus Arrays    
Date: Sunday, 12 September 1982  21:23-EDT
From: MOON at SCRC-TENEX
To: Dick Gabriel <RPG at SU-AI>
Cc: common-lisp at SU-AI
Subject: Vectors versus Arrays   

I think the point here, which perhaps you don't agree with, is that
"vector" is not a useful concept to a user (why is a vector different from
a 1-dimensional array?)  It's only a useful concept to the implementor, who
thinks "vector = load the Lisp pointer into a base register and index off
of it", but "array = go call an interpretive subroutine to chase indirect
pointers", or the code-bummer, who thinks "vector = fast", "array = slow".
Removing the vector/array distinction from the guts of the language is in
much the same spirit as making the default arithmetic operators generic
across all types of numbers.

I don't think anyone from "the Symbolics crowd convinced us that changing
things were too hard for them"; our point was always that we thought it was
silly to put into a language designed in 1980 a feature that was only there
to save a few lines of code in the compiler for the VAX (and the S1), when
the language already requires declarations to achieve efficiency on those
machines.

If you have a reasonable rebuttal to this argument, I at least will listen.
It is important not to return to "four implementations going in four different
directions."

∂12-Sep-82  2022	Guy.Steele at CMU-10A 	??? (that is, LET and LET*)
Date: 12 September 1982 2323-EDT (Sunday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  ??? (that is, LET and LET*)

Indeed, we voted in November not to require LET* to be a macro for precisely
the reason DLW states: the "obvious" expansion runs afoul of declarations,
not only SPECIALs but also type declarations.
--Guy

∂12-Sep-82  2114	Guy.Steele at CMU-10A 	Re: Case    
Date: 13 September 1982 0015-EDT (Monday)
From: Guy.Steele at CMU-10A
To: UCBKIM.jkf at UCB-C70
Subject:  Re: Case
CC: common-lisp at SU-AI
In-Reply-To:  <8208122221.478@UCBKIM.BERKELEY.ARPA>

I don't want to deprive you of the last word, and you'll still get it if
you reply to this.  I am curious as to what the outcome would be of a poll
that includes this variant of your question 2:

   If a case-insensitive Common Lisp were the only lisp available on your
   machine, would you:

   a) use it, possibly with some grumbling, as a case-insensitive language?
   
   b) ask the person in charge of Common Lisp at your site to add a switch
      to disable the code that maps all characters to the same case, thus
      making it possible for each user to make Common Lisp case-sensitive,
      realizing that to take advantage of this switch would render your
      code non-portable (that is, potentially unusable at any non-Unix site,
      and even potentially unusable at any site but your own)?

Would you be willing to take a poll on this question?  (I don't insist on
it, particularly if you are certain that everyone polled before realized
the implication that I have spelled out explicitly in response b) above.)
--Guy

∂12-Sep-82  2131	Scott E. Fahlman <Fahlman at Cmu-20c> 	RPG on Vectors versus Arrays   
Date: Sunday, 12 September 1982  23:47-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: RPG on Vectors versus Arrays   


I'm sure each of us could design a better language than Common Lisp is
turning out to be, and that each of those languages would be different.
My taste is close to RPG's, I think: in general, I like primitives that
I can build with better than generalizations that I can specialize.
However, Common Lisp is politics, not art.  If we can come up with a
single language that we can all live with and use for real work, then we
will have accomplished a lot more than if we had individually gone off
and implemented N perfect Lisp systems.

When my grandchildren, if any, ask me why certain things turned out in
somewhat ugly ways, I will tell them that it is for the same reason that
slaves count as 3/5 of a person in the U.S. Constitution -- that is the
price you pay for keeping the South on board (or the North, depending).
A few such crocks are nothing to be ashamed of, as long as the language
is still something we all want to use.  Even with the recent spate of
ugly compromises, I think we're doing pretty well overall.

For the record, I too believe that Common Lisp would be a clearer and
more intuitive language if it provided a simple vector data type,
documented as such, and presented hairy multi-D arrays with fill
pointers and displacement as a kind of structure built out of these
vectors.  This is what we did in Spice Lisp, not to fit any particular
instruction set, but because it seemed obviously right, clear, and
easily maintainable.  I have always felt, and still feel, that the Lisp
Machine folks took a wrong turn very early when they decided to provide
a hairy array datatype as primary with simple vectors as a degenerate
case.

Well, we proposed that Common Lisp should uniformly do this our way,
with vectors as primary, and Symbolics refused to go along with this.  I
don't think this was an unreasonable refusal -- it would have required
an immense effort for them to convert, and most of them are now used to
their scheme and like it.  They have a big user community already,
unlike the rest of us.  So we have spent the last N months trying to
come up with a compromise whereby they could do things their way, we
could do things our way, and everything would still be portable and
non-confusing.

Unfortunately, these attempts to have it both ways led to all sorts of
confusing situations, and many of us gradually came to the conclusion
that, if we couldn't have things entirely our way, then doing things
pretty much the Lisp Machine way (with the addition of the simple-vector
hack) was the next best choice.  In my opinion, the current proposal is
slightly worse than making vectors primary, but not much worse, and it
is certainly something that I can live with.  The result in this case is
close to what Symbolics wanted all along, but I don't think this is the
result of any unreasonable political tactics on their part.  Of course,
if RPG is seriously unhappy with the current proposal, we will have to
try again.  There is always the possibility that the set of solutions
acceptable to RPG or to the S1 group does not intersect with the set
acceptable to Symbolics, and that a rift is inevitable, but let us hope
that it does not come down to that.

-- Scott

∂12-Sep-82  2043	Guy.Steele at CMU-10A 	Job change for Quux   
Date: 12 September 1982 2344-EDT (Sunday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Job change for Quux

I have accepted a position at Tartan Laboratories, Incorporated ("Bill
Wulf's company" -- notice the quote marks) beginning 1 January 1983.
For this purpose I have applied to CMU for a one-year leave of absence.
(The length of this leave is standard; it should not be construed as
positive evidence that I will definitely return to CMU after one year,
nor should this disclaimer be construed as negative evidence.)

More disclaimer:  I can not speak for Tartan Laboratories in any
official manner at this time.  Nevertheless, I think it is safe to say
that in the near term I will probably be working on PQCC-type software
for the construction of compilers for algebraic languages.  Those of you
who know me or saw the paper at the June SIGPLAN Compiler Construction
conference know that I have great interest in "mainstream" compiler
technology, motivated in part by a desire to apply such technology to
AI languages such as LISP; the S-1 compiler leans heavily on what
was learned from the BLISS-11 compiler.

I hope this new job will not take me out of the LISP community.  I'll be
on the ARPANET, and I'll be involved in IJCAI-83 and the next LISP
conference, whenever it is (in two or three years).  Also, I have
informal assurance from Wulf that there will be no problem with my
spending a few hours a week working on Common LISP, so if everyone
concerned is agreeable I will continue to edit the manual, poll for
opinions, collate issues, and so on (I predict fairly rapid convergence
now anyway, with most of the problems resolved by January).

I cannot say whether Tartan will be interested in producing LISP compilers
[disclaim, disclaim].  I think it's fair to say that they are much more
likely to do so with me than without me (or someone like me, i.e.,
a LISP person).

∂13-Sep-82  0016	RPG  	Mail duplications  
To:   common-lisp at SU-AI  
Contrary to what I assume most of you believe, the duplication of messages
is *not* due to my stupidity: it is a bug in the MAILER here combined with
flakiness of SAIL wrt the ARPANET at present.  When these failures occur
(and they sometimes occur 10 times an hour), if a COMMON-LISP message is
in progress, it is re-started - from the first person on the list. If it
is any consolation, I am first on the list, so I get more duplicated
messages than anyone.

Also, contrary to what many of you must believe, I am not sitting here
in California chuckling away at the fact you see these messages over
and over: I am working with the SAIL wizards on some sort of fix, which
apparently is less trivial than we thought.

			-rpg-

∂13-Sep-82  1133	RPG  	Reply to Moon on `Vectors versus Arrays'    
To:   common-lisp at SU-AI  
The difference to a user between a vector and an array is that an array is
a general object, with many features, and a vector is a commonly used
object with few features: in the array-is-king scheme one achieves a
vector via specialization.  An analogy can be made between arrays/vectors
and Swiss Army knives. A Swiss army knife is a fine piece of engineering;
and, having been at MIT for a while 10 years ago, I know that they are
well-loved there. However, though a keen chef might own a Swiss Army
knife, he uses his boning knife to de-bone - he could use his Swiss Army
knife via specialization. We all think of programs as programs, not as
categories with flow-of-control as mappings, and, though the latter
is correct, it is the cognitive overhead of it that makes us favor the
former over the latter.

To me the extra few lines of code in the compiler are meaningless (why
should a few extra lines bother the co-author of a 300-page compiler?); a
few extra lines of emitted code are not very relevant either if it comes
to that (it is, after all, an S-1).  Had I been concerned with saving `a
few lines of code in the compiler' you can trust that I would have spoken
up earlier about many other things.

The only point I am arguing is that the cognitive overhead of making
vectors a degenerate array *may* be too high.

			-rpg-

∂13-Sep-82  1159	Kim.fateman@Berkeley 	vectors, arrays, etc   
Date: 13 Sep 1982 11:22:46-PDT
From: Kim.fateman@Berkeley
To: common-lisp@SU-AI
Subject: vectors, arrays, etc

I believe that for many future applications, the most important type of vector
or array is one that corresponds to the data format of the Fortran or
other numerical compiler system on the same computer.  If, for example,
VAX common lisp does not have this, I believe some potential
users will be unhappy.
Whether this should be done as a primary data type or as
an optimization is probably irrelevant if it works.

∂13-Sep-82  1354	UCBKIM.jkf@Berkeley 	Re:  Re: Case 
Date: 13-Sep-82 13:20:25-PDT (Mon)
From: UCBKIM.jkf@Berkeley
Subject: Re:  Re: Case
Message-Id: <8208132020.20156@UCBKIM.BERKELEY.ARPA>
Received: by UCBKIM.BERKELEY.ARPA (3.193 [9/6/82]) id a20156;
	13-Sep-82 13:20:36-PDT (Mon)
Received: from UCBKIM.BERKELEY.ARPA by UCBVAX.BERKELEY.ARPA (3.198 [9/12/82])
	id A04850; 13-Sep-82 13:21:12-PDT (Mon)
To: Guy.Steele@CMU-10A
Cc: common-lisp@su-ai.fateman
In-Reply-To: Your message of 13 September 1982 1558-EDT (Monday)


  I think that most people know that if they use a feature added at their
site, then there is a good chance that their code will not be portable.  I
doubt that the results of asking your question would be much different than
the results that I got, and even if they were the same I don't think that it
would make a bit of difference to the way the Common Lisp implementors feel
on this issue. I think that we will just have to wait a few years before we
can judge the wisdom of making Common Lisp case-insensitive and ignoring the
case-sensitive crowd.





∂13-Sep-82  1607	Masinter at PARC-MAXC 	Re: Case    
Date: 13-Sep-82 11:21:15 PDT (Monday)
From: Masinter at PARC-MAXC
Subject: Re: Case
In-reply-to: Fahlman's message of Sunday, 12 September 1982  17:33-EDT
To: common-lisp at SU-AI

For the record, I don't think I voted; I merely entered a proposal.

I was trying to propose a compromise to the conflicting goals of
"programs should print out like they read in" and "we like to enter programs
in lower case".

If the majority of programmers prefer lower case to upper as a way of
reading their programs, programs should defaultly print out in lower case.
Given a case-insensitive status-quo, should the reader coerce to lower case
rather than upper?

Larry

∂13-Sep-82  1635	Kent M. Pitman <KMP at MIT-MC>
Date: 13 September 1982 19:31-EDT
From: Kent M. Pitman <KMP at MIT-MC>
To: masinter at PARC-MAXC
cc: Common-Lisp at SU-AI

It makes a difference whether coercion is to upper or lower case.
Programs have to know which to expect. Changing the direction will break
programs.  eg, all FASL files would have to be recompiled because READ
would not get called to hack the case of the symbols dumped out in them, etc.

I think the right answer is that the bulk of the existing systems
already coerce to upper case and have programs relying on that fact.
That being the case, I think it contrary to the principles of Common
Lisp to ask that this be changed. It is an arbitrary decision and will
not affect program transportability, expressive power, or whatever.

The switch proposed to allow downcasing on output should satisfy the
needs of those who like lowercase output, but it's essential to the
correctness of such a switch that all implementations coerce in the same
direction, and I think that uppercase is the direction that minimizes
the amount of hassle.

If this were a new Lisp being designed from scratch, one might argue 
legitimately that lowercase should be the case of choice internally. I
certainly wouldn't, but people might.... but since it's a language spec
designed to make existing code more runnable, not less so, I think maintaining
the status quo as much as possible on such issues is the right thing.
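
To be concrete about the output switch, something along these lines would do
(the switch name is made up, and SYMBOL-NAME and STRING-DOWNCASE stand for
whatever the print-name accessor and the downcasing function end up being
called):

(defvar *print-lower-case* nil)          ;made-up switch name

(defun output-symbol-name (symbol stream)
  (let ((name (symbol-name symbol)))     ;internally upper case, per the status quo
    (princ (if *print-lower-case*
               (string-downcase name)
               name)
           stream)))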

∂13-Sep-82  2012	JonL at PARC-MAXC 	Re: Clarification of full funarging and spaghetti stacks
Date: 13 Sep 1982 20:09 PDT
From: JonL at PARC-MAXC
Subject: Re: Clarification of full funarging and spaghetti stacks
In-reply-to: dlw at SCRC-TENEX's message of Tuesday, 7 September 1982,
 16:57-EDT
To: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
cc: MOON at SCRC-TENEX at MIT-MC, common-lisp at SU-AI

Apologies again for being a week behind in mail (I'm trying hard to 
catch up!).  This issue of "funarging and spaghetti stacks" came up
in the context of some *private* mail  I had sent to Moon, hoping to
get his view of the matter first (unfortunately, the mailer here loused
up the "at SCRC-TENEX" part, and he didn't get the mail).
    Date: 3 Sep 1982 17:59 PDT
    From: JonL at PARC-MAXC
    Subject: Re: a protest
    In-reply-to: MOON's message of Tuesday, 31 August 1982  17:38-EDT
    To: MOON at SCRC-TENEX
    cc: JonL,Guy.Steele@CMUA,Hedrick@Rutgers
    . . . 
GLS has already replied to this one, with cc to CommonLisp, and I'm still 
not sure if there is a consensus (or even a clear understanding of terms).

Part of the confusion may be due to the new terminology in the CL
manual (admittedly, good terminology, but still new to lispers).   Anyway, 
it *appears* that the meaning of "funargs" in this context implies potential
stack-frame retention;  I think it was concern over this point ("full funargs"
with indefinite scope) that brought up the discussion over a point which may 
be called "PROG label retention" (the indefinite extension of a dynamic "GO",
namely THROW).

I like the idea of CLOSUREs having indefinite extent, and also being able to
"capture" selected special variables, as in the current LISPM.  I don't like the
idea of all environment being "closed over", which implies spaghetti.
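
For reference, the LISPM-style construct I mean looks roughly like this
(CLOSURE takes a list of special variables to capture and a function):

(defun make-counter ()
  (let ((count 0))
    (declare (special count))
    (closure '(count)                     ;capture just this one binding
             #'(lambda () (setq count (+ count 1))))))

;; Each closure returned by MAKE-COUNTER keeps its own binding of COUNT
;; alive indefinitely, without closing over the whole environment.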



∂13-Sep-82  2230	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Reply to Gabriel on `Vectors versus Arrays'      
Date: Tuesday, 14 September 1982, 01:25-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: Reply to Gabriel on `Vectors versus Arrays'    
To: common-lisp at SU-AI
In-reply-to: The message of 13 Sep 82 14:33-EDT from Dick Gabriel <RPG at SU-AI>

I guess this is purely a cultural difference, since my argument -against-
having vectors is really exactly the same as your argument -for- having
vectors: the alternative being argued against is too much cognitive
overhead.  I don't know why this was never brought out in the open originally.

∂14-Sep-82  1823	JonL at PARC-MAXC 	Re: `Vectors versus Arrays',  and the original compromise    
Date: 14 Sep 1982 18:23 PDT
From: JonL at PARC-MAXC
Subject: Re: `Vectors versus Arrays',  and the original compromise
In-reply-to: RPG's message of 13 Sep 1982 1133-PDT
To: Dick Gabriel <RPG at SU-AI>, Moon@mit-mc
cc: common-lisp at SU-AI

During the Nov 1981 CommonLisp meeting, the LispM folks (Symbolics, and 
RG, and RMS) were adamantly against having any datatype for "chunked" 
data other than arrays.  I thought, however, that some sort of compromise was
reached shortly afterwards, at least with the Symbolics folks, whereby VECTORs
and STRINGs would exist in CL pretty much the way they do in other lisps not
specifically intended for special purpose computers (e.g., StandardLisp, PSL,
Lisp/370, VAX/NIL etc).

It was admitted that the Lispm crowd could emulate these datatypes by some
trivial variations on their existing array mechanisms -- all that would be forced
on the Lispm crowd is some kind of type-integrity for vectors and strings, and
all that would be forced on the implementors of the other CLs would be the 
minimal amount for these two "primitive" datatypes.  Portable code ought to use
CHAR or equivalent rather than AREF on strings, but that wouldn't be required,
since all the generic operations would still work for vectors and strings.

So the questions to be asked are:
 1) How well have Lisps without fancy array facilities served their
    user community?  How well have they served the implementors
    of that lisp?   Franz and PDP10 MacLisp have only primitive
    array facilities, and most of the other mentioned lisps have nothing
    other than vectors and strings (and possibly bit vectors).   
 2) How much is the cost of requiring full-generality arrays to be
    part of the white pages?  For example, can it be assured that all
    memory management for them will be written in portable CL, and
    thus shared by all implementations?  How many different compilers
    will have to solve the "optimization" questions before the implementation
    dependent upon that compiler will run in real time?
 3) Could CL thrive with all the fancy stuff of arrays (leaders, fill pointers,
    and even multiple-dimensioning) in the yellow pages?  Could a CL
    system be reasonably built up from only the VECTOR- and STRING-
    specific operations (along with a primitive object-oriented thing, which for
    lack of a better name I'll call EXTENDs, as  in the NIL design)?  As one
    data point, I'll mention that VAX/NIL was so built, and clever things
    like Flavors were indeed built over the primitives provided.
I'd think that the carefully considered opinions of those doing implementations
on "stock" hardware should prevail, since the extra work engendered for the
special-purpose hardware folks has got to be truly trivial.

It turns out that I've moved from the "stock" camp into the "special-purpose"
camp, and thus in one sense favor the current LispM approach to index-
accessible data (one big uniform data frob, the ARRAY).   But this may
turn out to be relatively unimportant -- in talking with several sophisticated
Interlisp users, it seems that the more important issues for them are the ability 
to have arrays with user-tailorable accessing methods (I may have to remind 
you all that Interlisp doesn't even have multi-dimension arrays!), and the ability
to extend certain generic operators, like PLUS, to arrays (again, the reminder that
Interlisp currently has no standard for object-oriented programming, or for
procedural attachment).


∂14-Sep-82  1835	JonL at PARC-MAXC 	Desensitizing case-sensitivity 
Date: 14 Sep 1982 18:35 PDT
From: JonL at PARC-MAXC
Subject: Desensitizing case-sensitivity
To: Common-Lisp@su-ai

As SEF says, it looks like the issue is *nearly* unanimous now, so there's
not much need for more discussion.  Unfortunately, due to some kind of
mailer lossage, my note on the subject, dated Sep 3, didn't get delivered;
I'm reproducing it below, primarily for the benefit of comments which 
may tend to make the "*nearly* unanimous" choice more palatable.
[p.s. these points won't be covered in the final exam for the CommonLisp
 reading course, but you may get extra credit for perusing them]

Date: 3 Sep 1982 16:00 PDT
From: JonL at PARC-MAXC
Subject: Case sensitivity of CommonLisp -- Second thoughts on the modest
 proposal
To: Kim.Jkf@Berkeley
cc: CommonLisp@su-ai,franz-friends@Berkeley
In-Reply-to: Jkf's msg of 29-Aug-82 22:02:05-PDT [and subsequent msgs of
 others]

This issue is dragging on entirely too long, so I promise this to be my last
entry into the mire.

It seems that two independent issues are brought up here, and confusion
between the two has led to more flaming than necessary:
  1) What is the default action of the reader -- InterLisp style (case sensitive),
     PDP10 MacLisp style (uppercasify non-escapeds), or Unix style
     (lowercasify non-escapeds)?
  2) What shall be the names of the standard "white pages" functions -- all
     upper case or all lower case?
Certainly I hope no one is still trying to throw out the reader escape
conventions, by which *any* default choice can be ignored (i.e., backslash
and vertical bar).  
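
(To illustrate: whatever the default, one can always write, say,

    |CamelCase|     ;a symbol whose name is exactly "CamelCase"
    FOO\bBAR        ;a symbol with one lower-case letter in its name,
                    ;even under an uppercasifying default

since the escapes suppress whatever casification the reader would otherwise
perform; the particular names here are just made-up examples.)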

I'm a little appalled that so few have seen the advantage of a case-sensitive
reader with a shift-lockable keyboard.  Having adjusted to InterLisp on the
XEROX keyboard I can honestly say that I prefer it slightly to the case-
insensitive MIT world that I came from.  In fact, oodles of InterLisp users 
seem to have no trouble typing the uppercase names of standard functions 
(and thereby being coaxed into using mostly uppercase for their own symbols)
in this case-sensitive system.

Some keyboards have a shift-lock key that is less usable than desirable;
but even if we should adopt a case-sensitive reader (unlikely, I think), would
it be in any way desirable to decide such an important issue on the basis
of some keyboard manufacturer's goof?

Thus I'd prefer to bypass Masinter's "modest" proposal, agreeing with Moon
that it is a "radical" proposal, not because of wanting the default reader to be
case-sensitive (note however, that Moon strongly objects to this) but because 
of the gross switch of *historic* function names from "CAR" to "car" and so on. 
This, I'm sure, almost anyone in the non-Franz MacLisp/Lispm community
would find totally unacceptable.   I refer again to the mistake made in Multics
MacLisp, which adopted this notion, and the years of pain we had 
accommodating to it (see also Moon's commentary on this point.) 

In fact, Larry's later message of 31-Aug-82 18:51:03 PDT (Tuesday)  makes
it abundantly clear that the current (non-radical) mode of operation is 
a winner.  As he says:
   I have on more than one occasion taken someone else's Interlisp program 
   and (without very much pain) converted all of the MixedCaseIdentifiers 
   to ALLUPPERCASE before including it in the Interlisp system (in which,
   although mixed case is allowed, all standard functions are uppercase to avoid
   confusion.)
   This has been acceptable. That is: "it tells the case-sensitive folks that 
   it is OK for them to use mixed-case with sensitivity, but that if they do so, 
   their package will have to be converted before it will be accepted into
   CommonLisp."
Wouldn't there be less confusion if we adopted this as a modest proposal, namely
that "...all standard functions are uppercase to avoid confusion"?

Incidentally, I view the style of the CL manual as more GLS's personal
preference about readability of manuals, rather than any inherent property
from which we can deduce an answer to the question in front of us.   I
myself would prefer UPPERCASE for white-pages function names for exactly 
the same reason, readability -- but this is an extremely small point and I'll be
happy with whatever GLS does about it.



∂15-Sep-82  0824	Guy.Steele at CMU-10A 	Case usage in CL manual    
Date: 15 September 1982 1112-EDT (Wednesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Case usage in CL manual

I guess I didn't think too hard about the choice of case in
the manual; it was inherited from the LISP Machine manual and
the old MacLISP manual.  Perhaps MOON can lend insight here.

I happen to favor case insensitivity with internal upper-case
canonicalization because I find it very convenient to let case
distinguish input from output (what I type is lower case, what
is printed is in upper case).  I admit that others might find this
annoying.  Except for this mild preference, I suppose I could live
happily with a case-sensitive LISP provided that no one took
advantage of it.  (Using "car" and "Car" as distinct variable names
strikes me as being on a par with using FOO and F00 as distinct
variable names -- it just isn't good practice.  That is my taste.)

In any event, suggestions for improvement of the presentation in
the CL manual are always appreciated.

∂15-Sep-82  1012	Martin.Griss <Griss at UTAH-20> 	Re: Case    
Date: 15 Sep 1982 1108-MDT
From: Martin.Griss <Griss at UTAH-20>
Subject: Re: Case
To: Masinter at PARC-MAXC, common-lisp at SU-AI
cc: Griss at UTAH-20
In-Reply-To: Your message of 13-Sep-82 1121-MDT

I am currently used to upper-case coercing in PSL, etc.; I could get used to
lower (were lower = upper in some environments...).
-------

∂15-Sep-82  1343	Jeffrey P. Golden <JPG at MIT-MC>  
Date: 15 September 1982 16:44-EDT
From: Jeffrey P. Golden <JPG at MIT-MC>
To: Masinter at PARC-MAXC
cc: common-lisp at SU-AI

   Date: 13-Sep-82 11:21:15 PDT (Monday)
   From: Masinter at PARC-MAXC
   Subject: Re: Case
   If the majority of programmers prefer lower case to upper as a way of
   reading their programs, programs should defaultly print out in lower case.
   Given a case-insensitive status-quo, should the reader coerce to lower 
   case rather than upper?
I agree with KMP's and GLS's responses to this.  
I don't know what the majority of programmers prefer, but I've found many 
programmers who prefer every which way: some lower case, some upper case, 
some like to capitalize the first letter of names, some like to 
capitalize LISP System names and quoted items but not variables.
I personally find talking about LISP easiest when code is capitalized while 
descriptions are in mixed case so it is easy to separate the two by eye.

∂15-Sep-82  1752	Scott E. Fahlman <Fahlman at Cmu-20c> 	OPTIMIZE Declaration 
Date: Wednesday, 15 September 1982  20:51-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Cc:   fahlman at CMU-20C
Subject: OPTIMIZE Declaration


At the meeting I volunteered to produce a new proposal for the OPTIMIZE
declaration.  Actually, I sent out such a proposal a couple of weeks
ago, but somehow it got lost before reaching SU-AI -- both that machine
and CMUC have been pretty flaky lately.  I did not realize that the rest
of you had not seen this proposal until a couple of days ago.
Naturally, this is the one thing I did not keep a copy of, so here is my
reconstruction.  I should say that this proposal is pretty ugly, but it
is the best that I've been able to come up with.  If anyone out there
can do better, feel free.

Guy originally proposed a format like (DECLARE (OPTIMIZE q1 q2 q3)),
where each of the q's is a quality from the set {SIZE, SPEED, SAFETY}.
(He later suggested to me that COMPILATION-SPEED would be a useful
fourth quality.)  The ordering of the qualities tells the system which
to optimize for.  The obvious problem is that you sometimes want to go
for, say, SPEED above all else, but usually you want some level of
compromise.  There is no way in this scheme to specify how strongly the
system should favor one quality over another.  We don't need a lot of
gradations for most compilers, but the simple ordering is not expressive
enough.

One possibility is to simply reserve the OPTIMIZE declaration for the
various implementations, but not to specify what is done with it.  Then
the implementor could specify in the red pages whatever declaration
scheme his compiler wants to follow.  Unfortunately, this means that
such declarations would be of no use when the code is ported to another
Common Lisp, and users would have no portable way to flag that some
function is an inner loop and should be super-fast, or whatever.  The
proposal below tries to provide a crude but adequate optimization
declaration for portable code, while still making it possible for users
to fine-tune the compiler's actions for particular implementations.

What I propose is (DECLARE (OPTIMIZE (qual1 value1) (qual2 value2) ...)),
where the qualities are the four mentioned above and each is paired with
a value from 0 to 3 inclusive.  The ordering of the clauses doesn't
matter, and any quality not specified gets a default value of 1.  The
intent is that {1, 1, 1, 1} would be the compiler's normal default --
whatever set of compromises the implementor believes is appropriate for
his user community.  A setting of 0 for some value is an indication that
the associated quality is unimportant in this context and may be
discriminated against freely.  A setting of 2 indicates that the quality
should be favored more than normal, and a setting of 3 means to go all
out to favor that quality.  Only one quality should be raised above 1 at
any one time.
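
For concreteness, a use of the proposed syntax might look like this (the
function itself is just an invented example; SAFETY and COMPILATION-SPEED
are left to default to 1):

(defun inner-product (u v n)
  (declare (optimize (speed 3) (size 0)))   ;go all out for speed; size unimportant
  (do ((i 0 (1+ i))
       (sum 0 (+ sum (* (aref u i) (aref v i)))))
      ((= i n) sum)))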

The above specification scheme is crude, but sufficiently expressive for
most needs in portable code.  A compiler implementor will have specific
decisions to make -- whether to suppress inline expansions, whether to
type-check the arguments to CAR and CDR, whether to check for overflow
on arithmetic declared to be FIXNUM, whether to run the peephole
optimizer, etc. -- and it is up to him to decide how to tie these
decisions to the above values so as to match the user's expressed wishes.
These decision criteria should be spelled out in that implementation's red
pages.  For example, it might be the case that the peephole optimizer is
not run if COMPILATION-SPEED > 1, that type checking for the argument to
CAR and CDR is suppressed if SPEED > SAFETY+1, etc.
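
Purely as a sketch of what such criteria might look like inside a compiler
(the variable names below are invented for illustration, not part of this
proposal):

;; Assume *SPEED*, *SAFETY*, and *COMPILATION-SPEED* hold the declared
;; values currently in effect.
(defun run-peephole-optimizer-p ()
  (<= *compilation-speed* 1))
(defun type-check-car/cdr-p ()
  (<= *speed* (1+ *safety*)))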

A compiler may optionally provide for finer control in an
implementation-dependent way by allowing the user to set certain
compiler variables or switches via declarations.  The policies specified
by these variables would override any policies derived from the
optimization values described above.  The syntax would be as follows:

(DECLARE (COMPILER-VARIABLE implementation (var1 val1) (var2 val2) ...))

Each implementation would choose a distinct name, and the compiler would
ignore any COMPILER-VARIABLE declarations with a different
implementation name.  The red pages for an implementation would specify
what compiler variables are available and what they do.  Thus we might
have

(defun foo (x)
  (declare (compiler-variable vax (type-check-car/cdr nil))
           (compiler-variable s3600 (type-check-everything t))
           (compiler-variable s1 (slow-down-so-the-rest-can-catch-up t)))
  ...)

-- Scott

∂15-Sep-82  1931	MOON at SCRC-TENEX 	OPTIMIZE Declaration
Date: Wednesday, 15 September 1982  22:01-EDT
From: MOON at SCRC-TENEX
to:   common-lisp at SU-AI
Subject: OPTIMIZE Declaration
In-reply-to: The message of 15 Sep 1982  20:51-EDT from Scott E. Fahlman <Fahlman at Cmu-20c>

Scott's suggestion for small numeric weights for the optimize dimensions
sounds good.

Shouldn't the compiler-variable declaration be conditionalized with #+ rather
than by putting an implementation name into the form?  And if that is done,
shouldn't it just be additional implementation-dependent optimize dimensions,
rather than an entirely separate declaration?

∂15-Sep-82  1952	Earl A. Killian <EAK at MIT-MC> 	OPTIMIZE Declaration  
Date: 15 September 1982 22:48-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  OPTIMIZE Declaration
To: MOON at SCRC-TENEX
cc: common-lisp at SU-AI

The advantage of not relying on #+ is that it forces people to
write portable declarations.  Whether this is a real advantage is
another matter.

∂15-Sep-82  2020	Scott E. Fahlman <Fahlman at Cmu-20c> 	OPTIMIZE Declaration 
Date: Wednesday, 15 September 1982  23:19-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: OPTIMIZE Declaration


It is true that #+ would do the job here, and that by using #+ we could
eliminate COMPILER-VARIABLE and just hide these things under OPTIMIZE.
(Some compiler variables have nothing to do with optimization, but
that's a minor point.)  I really hate the way #+ looks, and would like
to avoid its use in Common Lisp wherever possible, even if that means
introducing a new construct or two.  However, I recognize that this is
an irrational hangup of mine, probably the result of all the
over-conditionalized and under-documented Maclisp code I have had to
wade through in the last few years.  If most of the group wants to go
with #+ for this use, I have no real objection, though the separate
COMPILER-VARIABLE form looks a lot better to me.

∂16-Sep-82  0112	Kent M. Pitman <KMP at MIT-MC>
Date: 16 September 1982 04:12-EDT
From: Kent M. Pitman <KMP at MIT-MC>
To: FAHLMAN at CMU-20C
cc: COMMON-LISP at SU-AI

I think your fear of #+ and #- is not as irrational as you think.
I certainly share it and might as well lay out my reasons why...

#+ and #- were designed to handle the fact that various Lisp source files
wanted to be portable but in a way that would more or less free each Lisp
of knowing anything about the others. Hence, #+ and #- are designed to make
it truly invisible to programs in LispM lisp (for example) that Maclisp 
code was interleaved within it since that Maclisp code would be semantically
meaningless to any LispM program. Similarly, Maclisp didn't have to worry 
about LispM code. This was the best you could do without building into Maclisp
assumptions about how LispM was going to do things and vice versa.

Now, however, Common Lisp is trying to address portability in a different
way. It is trying to make semantics predictable from dialect to dialect.
As such, it can afford to use better mechanisms than #+ and #-.

The bad thing about #+ and #- is that they tie a tremendously useful piece
of functionality (site and feature conditionalization) to a syntax that
has no underlying structure in the language. One of the nice things that
people like about Lisp is the uniform underlying structure. There is no worry,
as there would be in Algol, for example, that x+y is a piece of syntax
which has to be treated differently than f(x,y), even though in Lisp the two
would share a common representation. #+ and #- are quite analogous to this, I
think, in that they are an irritating special case that is hard to carry
around internally. If you want to manipulate them, you have to create special
kinds of structures not provided in the initial language semantics. I am
worried about two things:

 * Programs that read these things. In the Programmer's Apprentice project,
   we have programs that want to read and analyze and recode functions.
   If the user has a program 
     (defun f (x) ( #+Maclisp + #-Maclisp - x 3))
   and the Apprentice reads his program on the LispM where it runs, it will
   read 
     (defun f (x) (- x 3))
   and later when it recodes it, it will have lost the #-Maclisp information.
   This is bad. I don't want to have to write my own reader to handle this.

 * Further, if I want to construct a program piece-wise in the apprentice and
   then print it out, I may want to site-conditionalize things. I don't want
   to have to have a special printer that prints out certain forms as #+x
   or #-x ... I want a form that is a structured part of the language and
   which the Lisp printer prints normally.

#+ and #- are not unique in causing this problem. #. and #, cause it, too.
However, their functionality is considerably harder to express with normal
syntax. #+ and #- have reasonably easy-to-contrive variants which work via
normal s-expressions... at least for most cases.

I won't even say #+ and #- should go away. I will, however, urge strongly
that no design decisions be made which would cause the user to want to use
#+ or #- to get a piece of functionality which can easily be gotten another
cleaner way.

The other point that I would like to make about #+ and #- while I am 
bad-mouthing them is that they are not structured. I have written code that
does things like 
  #+SFA X
  #-SFA
   #+BIGNUM
    #+MULTICS Y
    #-MULTICS Z
   #-BIGNUM
    #+TOPS-20 W
    #-TOPS-20 (ERROR "# conditional failed")
and it really scares me that there's nothing to tell me that I'm not getting
2 forms in some environments or 0 in others where usually I want exactly 1.
How many times have you wanted #+ELSE ? Gee. I really wish there were 
the equivalent s-expression conditional so I could write
  (FEATURE-CASEQ
    (SFA X)
    ((AND BIGNUM MULTICS) Y)
    (BIGNUM Z)
    (TOPS-20 W)
    (T (ERROR ...)))
or some such thing just to know I can see the structure of the expression I've
written. ... plus of course I'd have something my programs could read/print.
By the way, while this example is a contrived case, I've really seen
code much like it from time to time in the Macsyma sources. It's a real eyesore
to read.
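
For what it's worth, a minimal sketch of such a macro is easy to write
against an assumed *FEATURES* list (the name FEATURE-CASEQ and the clause
syntax here are just the strawman above, nothing anyone has agreed to):

(defmacro feature-caseq (&rest clauses)
  (labels ((feature-p (spec)
             (cond ((eq spec t) t)
                   ((atom spec) (not (null (member spec *features*))))
                   ((eq (car spec) 'and) (every #'feature-p (cdr spec)))
                   ((eq (car spec) 'or)  (some  #'feature-p (cdr spec)))
                   ((eq (car spec) 'not) (not (feature-p (cadr spec)))))))
    ;; Expand into the consequent of the first clause whose feature
    ;; specification is satisfied; expand into NIL if none is.
    (dolist (clause clauses nil)
      (when (feature-p (car clause))
        (return `(progn ,@(cdr clause)))))))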

Well, I guess that establishes my opinion on the situation. I guess I think
#+ and #- had their place in history but the time for using them ought to pass
in Common Lisp since we can make real primitives with a clear portable meaning
to replace them.

∂16-Sep-82  0133	MOON at SCRC-TENEX 	Case usage in CL manual  
Date: Thursday, 16 September 1982  01:02-EDT
From: MOON at SCRC-TENEX
to:   common-lisp at SU-AI
Subject: Case usage in CL manual
In-reply-to: The message of 15 Sep 1982 1112-EDT () from Guy.Steele at CMU-10A

Why do the Maclisp manual and the Lisp machine manual display code in lower
case, although those systems actually use upper case?

Well, two reasons.  One is that Multics Maclisp, in keeping with the conventions
of its host operating system, is a case-sensitive Lisp with the standard case
being lower-case.  So it would have confused a lot of readers if the manual
printed code in upper-case.  The other reason is that all upper-case text tends
to look bad in printed manuals, particularly if you don't have a small-caps font.

I didn't say anything about this earlier, because I didn't want to prolong
the discussion, but I am another user who likes to see code and system
output in upper case, and comments and user input in lower case, to
distinguish them.  Multics Maclisp, supposedly a two-case system, ended up
being effectively a one-case system that never used upper case.

∂16-Sep-82  0133	MOON at SCRC-TENEX 	Hairiness of arrays 
Date: Thursday, 16 September 1982  02:23-EDT
From: MOON at SCRC-TENEX
to:   common-lisp at SU-AI
Subject: Hairiness of arrays
In-reply-to: The message of 14 Sep 1982 18:23 PDT from JonL at PARC-MAXC

Common Lisp arrays aren't necessarily all that hairy, frankly.  The Lisp
machine array leader feature was rejected as part of Common Lisp.  They
have fill pointers (when one-dimensional), but those are very simple to
implement if you don't mind having one word of overhead.  They are
multi-dimensional, but the number of dimensions in an array reference can
be detected at compile time, and surely needn't make the 1-dimensional case
less efficient.  And I think by now people know how to implement the
multi-dimensional ones.  They have a variety of packing types (word, byte,
bit), but so do vectors.

The only hairy feature Common Lisp arrays have is indirection
(displacement).  This isn't very important in Common Lisp, in my opinion,
and I don't think I ever advocated it.  It would not be a terrible idea to
flush it if that makes life substantially easier for some other
implementations.  I have yet to see a piece of code that used NSUBSTRING
and wasn't doing something wrong; NSUBSTRING is about the only Common Lisp
application for array indirection, except for something, perhaps confused,
going on with ADJUST-ARRAY-SIZE.  (The Lisp machine needs indirect arrays,
but not primarily for things that would be portable to other Common Lisp
implementations).

∂16-Sep-82  0216	JoSH <JoSH at RUTGERS> 	array hairiness 
Date: 16 Sep 1982 0513-EDT
From: JoSH <JoSH at RUTGERS>
Subject: array hairiness
To: common-lisp at SU-AI

Hear, hear!  I strongly urge flushing indirect arrays.

     --JoSH (speaking for the -20 implementation group)
-------

∂16-Sep-82  0353	DLW at MIT-MC 	Hairiness of arrays 
Date: Thursday, 16 September 1982  06:51-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   MOON at SCRC-TENEX
Cc:   common-lisp at SU-AI
Subject: Hairiness of arrays

I agree; array indirection could be removed from Common Lisp and
probably would not severely hurt portability.  Array indirection
was always one of those things that was most closely "computer-like",
depending on mapping one storage format on top of another storage
format.  It is a fundamentally poor thing to do because of the
non-obvious aliasing of data elements that it sets up.  It has
always been one of the hardest things to document because of this.
It requires the storage format of arrays to become apparent to
the semantics of the language, whereas taste and sense (in my own
opinion) dictate that the storage layout of an array should be no
more visible than the storage layout of a symbol or a bignum.

If people really want to keep this feature, we might consider
restricting it to its most useful case, namely mapping a 1-D array
onto a multi-D array, with the elements having exactly the same
element-types.

If this would make Common Lisp substantially easier to implement
on some machines, and make arrays seem less daunting, I'm all for it.
-------

∂16-Sep-82  0751	Masinter at PARC-MAXC 	Re: #-, #+  
Date: 16-Sep-82  7:51:11 PDT (Thursday)
From: Masinter at PARC-MAXC
Subject: Re: #-, #+
In-reply-to: KMP's message of 16 September 1982 04:12-EDT
To: COMMON-LISP at SU-AI

For what it is worth, Interlisp has settled more or less on s-expression 
conditionalizations for environment-dependent operations; that is, there
are functions (SYSTEMTYPE), (COMPILEMODE) ... which return the current
values for the system you are running in, and then one writes
(SELECTQ (SYSTEMTYPE)
	((TENEX TOPS-20) Interlisp-10 code)
	(D Interlisp-D code)
	(VAX Interlisp-VAX code)
	(J Interlisp-Jericho code)
	etc.)

Of course, the compiler can turn SYSTEMTYPE into a constant and
optimize away all but the relevant case.  This means
that there is no need for any special-purpose code walkers or whatever.

There have been a few cases where #+ and #- would have given better
control (e.g., in compiler declarations, which don't get evaluated)
but for the most part, the fact that there isn't a separate syntax is
an overriding advantage.

∂16-Sep-82  0808	Scott E. Fahlman <Fahlman at Cmu-20c> 	Array Displacement   
Date: Thursday, 16 September 1982  10:55-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: Array Displacement


I second the motion (or Nth it, I guess).  Most implementations may need
to create displaced arrays for internal uses, such as low-level graphics
support, but such uses are almost certain to be non-portable.  I would
be happy to eliminate these critters from the white pages.

-- Scott

∂16-Sep-82  1011	RPG  	Vectors versus Arrays (concluded) 
To:   common-lisp at SU-AI  
My view on vectors (reprise) is that there should at least be a series of
vector functions and a description of them in the manual, and this
description ought to be separate from the array description except for
adjacency in the manual or other cross-referencing; this satisfies
cognitive simplification (I believe). Vectors can have exactly the
semantics of 1-dimensional puppy arrays - or whatever they were called -
and implementations are free to implement vectors as these arrays, the
implementation being trivial. If other implementations care to implement
them in some other funny way for whatever reason, they may do so.
Compilers can optimize on the vector operation names, on declarations, or
on brilliant insight, as desired. I believe this largely revives the
pre-August 21 situation with some simplifications.
			-rpg-

∂16-Sep-82  1216	Earl A. Killian            <Killian at MIT-MULTICS> 	arrays 
Date:     16 September 1982 1209-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  arrays
To:       Common-Lisp at SU-AI

If the LISPM people think indirect arrays are really useful only for
weird hacks (like accessing their bitmap), then I'm all for flushing
them from Common Lisp (just to keep things clean).  However, I'm not
sure this makes anything more efficient, and thus rejoicing on the part
of implementors may be premature.  Isn't an indirection step necessary
anyway to implement growing arrays (except on the LISPM where they can
conditionally indirect cheaply)?  I had always presumed that would be
the same indirection that implemented indirect arrays.  Am I confused?

∂16-Sep-82  1308	Guy.Steele at CMU-10A 	Indirect arrays  
Date: 16 September 1982 1557-EDT (Thursday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Indirect arrays

I can think of three uses, offhand, for indirect arrays:
(1) Making an n-d array look 1-d so you can map over it.
(2) Making an n-d array look 1-d for some other purpose.
(3) Simulating FORTRAN equivalence, mostly for embedding FORTRAN in LISP.
Perhaps (3) should not be a goal of Common LISP.  (1) and (2)
could be mostly satisfied by introducing two functions MAPARRAY
(takes a function and n arrays, which must all have the
same rank and dimensions, and produces a new array of the same
rank and dimensions), and RAVEL (produces a 1-d array with the contents
of the argument array copied into it in row-major order -- note that
this does not require that the argument array actually does store
elements in row-major order, but if it doesn't then RAVEL will
have to do some shuffling).
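
To make the RAVEL idea concrete, here is a rough sketch (not a definitive
definition; the copying loop is only for illustration):

(defun ravel (array)
  (let* ((dims (array-dimensions array))
         (result (make-array (reduce #'* dims)
                             :element-type (array-element-type array)))
         (i 0))
    ;; Walk the subscripts in row-major order, copying each element
    ;; into the next slot of the one-dimensional result.
    (labels ((walk (subscripts remaining)
               (if (null remaining)
                   (progn (setf (aref result i)
                                (apply #'aref array (reverse subscripts)))
                          (incf i))
                   (dotimes (j (car remaining))
                     (walk (cons j subscripts) (cdr remaining))))))
      (walk '() dims))
    result))
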
Indeed, stock hardware at least will probably use indirection anyway
to be able to do ARRAY-GROW; but the interaction of ARRAY-GROW and
user-specified indirection is very tricky.

Here is a suggestion to alleviate that bad interaction: it should be
an error to shrink an array that has been indirected to.  This is
not difficult to check, if one has a bit in the array saying whether
or not it was ever indirected to.
--Guy

∂16-Sep-82  2039	Kent M. Pitman <KMP at MIT-MC> 	Portable declarations  
Date: 16 September 1982 23:39-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Portable declarations
To: COMMON-LISP at SU-AI

In regards to Scott's COMPILER-VARIABLE item...

    (DECLARE (COMPILER-VARIABLE implementation (var1 val1) (var2 val2) ...))

... could people comment on the idea that anything that wanted to hack
compiler declarations would have to first macroexpand the first item 
and then check it for DECLARE-ness? This'd let you write user-code like:

    (DEFMACRO OPSYS-CASE (&BODY STUFF) ;doesn't error check much
      `(PROGN ,@(CDR (OR (ASSQ (STATUS OPSYS) STUFF)
			 (ASSQ 'T STUFF)))))
and later do
    
   (LET* ((X 3) (Y X))
     (OPSYS-CASE
       (VAX   (DECLARE (TYPE-CHECK-CAR/CDR NIL) (SPECIAL X)))
       (S3600 (DECLARE (SPECIAL X)))  ;type checking in microcode
       (T     (DECLARE (SPECIAL X)))) ;who knows?
     ...stuff...)

and the system would not have to have special knowledge about the macro you've
written doing a DECLARE thing. It'd just macroexpand form1 in LET* and find
the DECLARE there and move it as appropriate.

This gives greater flexibility with respect to declarations, too, since it
allows one to write declaration-writing macros.

I haven't thought through all the consequences fully, but at first pass I can see no
problems about it other than just remembering to call MACROEXPAND in the few
places where you might want to be looking for a declaration.

Thoughts?
-kmp

∂16-Sep-82  2028	Scott E. Fahlman <Fahlman at Cmu-20c> 	Revised array proposal (long)  
Date: Thursday, 16 September 1982  23:27-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: Revised array proposal (long)


Here is a revision of my array proposal, fixed up in response to some of
the feedback I've received.  See if you like it any better than the
original.  In particular, I have explicitly indicated that certain
redundant forms such as MAKE-VECTOR should be retained, and I have
removed the :PRINT keyword, since I now believe that it causes more
trouble than it is worth.  A revised printing proposal appears at the
end of the document.

**********************************************************************

Arrays can be 1-D or multi-D.  All arrays can be created by MAKE-ARRAY
and can be accessed with AREF.  Storage is done via SETF of an AREF.
The term VECTOR refers to any array of exactly one dimension.
Vectors are special, in that they are also sequences, and can be
referenced by ELT.  Also, only vectors can have fill pointers.

Vectors can be specialized along several distinct axes.  The first is by
the type of the elements, as specified by the :ELEMENT-TYPE keyword to
MAKE-ARRAY.  A vector whose element-type is STRING-CHAR is referred to
as a STRING.  Strings, when they print, use the "..." syntax; they also
are the legal inputs to a family of string-functions, as defined in the
manual.  A vector whose element-type is BIT (alias (MOD 2)), is a
BIT-VECTOR.  These are special because they form the set of legal inputs
to the boolean bit-vector functions.  (We might also want to print them
in a strange way -- see below.)
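
To make the distinctions concrete, under this proposal one might write:

(make-array 10 :element-type 'string-char)  ;a STRING of length 10
(make-array 16 :element-type 'bit)          ;a BIT-VECTOR of length 16
(make-array 5)                              ;a general vector of element-type T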

Some implementations may provide a special, highly efficient
representation for simple vectors.  A simple vector is (of course) 1-D,
cannot have a fill pointer, cannot be displaced, and cannot be altered
in size after its creation.  To get a simple vector, you use the :SIMPLE
keyword to MAKE-ARRAY with a non-null value.  If there are any
conflicting options specified, an error is signalled.  If an
implementation does not support simple vectors, this keyword/value is
ignored except that the error is still signalled on inconsistent cases.

We need a new set of type specifiers for simple things: SIMPLE-VECTOR,
SIMPLE-STRING, and SIMPLE-BIT-VECTOR, with the corresponding
type-predicate functions.  Simple vectors are referenced by AREF in the
usual way, but the user may use THE or DECLARE to indicate at
compile-time that the argument is simple, with a corresponding increase
in efficiency.  Implementations that do not support simple vectors
ignore the "simple" part of these declarations.
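
For example (the function itself is invented), a user might write

(defun first-char (s)
  (declare (type simple-string s))   ;promise that S is a simple string
  (char s 0))

and an implementation that supports simple strings could then open-code the
CHAR; one that doesn't simply ignores the SIMPLE- part of the declaration.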

Strings (simple or non-simple) self-eval; all other arrays cause an
error when passed to EVAL.  EQUAL descends into strings, but not
into any other arrays.  EQUALP descends into arrays of all kinds,
comparing the corresponding elements with EQUALP.  EQUALP is false
if the array dimensions are not the same, but it is not sensitive to
the element-type of the array, whether it is simple, etc.  In comparing
the dimensions of vectors, EQUALP uses the length from 0 to the fill
pointer; it does not look at any elements beyond the fill pointer.
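
So, for instance, under these rules:

(equal  "foo" "foo")        ;true  -- EQUAL descends into strings
(equal  #(1 2 3) #(1 2 3))  ;false -- but not into other arrays (assuming
                            ;two distinct vectors)
(equalp #(1 2 3) #(1 2 3))  ;true  -- EQUALP descends into arrays of all kinds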

The set of type-specifiers required for all of this is ARRAY, VECTOR,
STRING, BIT-VECTOR, SIMPLE-VECTOR, SIMPLE-STRING, SIMPLE-BIT-VECTOR.
Each of these has a corresponding type-P predicate, and each can be
specified in list form, along with the element-type and dimension(s).

MAKE-ARRAY takes the following keywords: :ELEMENT-TYPE, :INITIAL-VALUE,
:INITIAL-CONTENTS, :FILL-POINTER, and :SIMPLE.  There is still some
discussion as to whether we should retain array displacement, which
requires :DISPLACED-TO and :DISPLACED-INDEX-OFFSET.

The following functions are redundant, but should be retained for
clarity and emphasis in code: MAKE-VECTOR, MAKE-STRING, MAKE-BIT-VECTOR.
MAKE-VECTOR takes the same keywords as MAKE-ARRAY, but can only take a
single integer as the dimension argument.  MAKE-STRING and
MAKE-BIT-VECTOR are like MAKE-VECTOR, but do not take the :ELEMENT-TYPE
keyword, since the element-type is implicit.  Similarly, we should
retain the forms VREF, CHAR, and BIT, which are identical in operation
to AREF, but which declare their array argument to be VECTOR, STRING, or
BIT-VECTOR, respectively.
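
So, for instance, the following would behave identically (S and B standing
for whatever string and bit-vector are at hand):

(make-string 10)    ;same as (make-array 10 :element-type 'string-char)
(char s 3)          ;same as (aref s 3), but flags S as a STRING
(bit b 7)           ;same as (aref b 7), but flags B as a BIT-VECTOR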

If the :SIMPLE keyword is not specified to MAKE-ARRAY or related forms,
the default is NIL.  However, vectors produced by random forms such as
CONCATENATE are simple, and vectors created when the reader sees #(...)
or "..." are also simple.

As a general rule, arrays are printed in a simple format that, upon
being read back in, produces a form that is EQUALP to the original.
However, some information may be lost in the printing process:
element-type restrictions, whether a vector is simple, whether it has a
fill pointer, whether it is displaced, and the identity of any element
that lies beyond the fill pointer.  This choice was made to favor ease
of interactive use; if the user really wants to preserve in printed form
some complex data structure containing non-simple arrays, he will have
to develop his own printer.

A switch, SUPPRESS-ARRAY-PRINTING, is provided for users who have lots
of large arrays around and don't want to see them trying to print.  If
non-null, this switch causes all arrays except strings to print in a
short, non-readable form that does not include the elements:
#<array-...>.  In addition, the printing of arrays and vectors (but not
of strings) is subject to PRINLEVEL and PRINLENGTH.

Strings, simple or otherwise, print using the "..."  syntax.  Upon
read-in, the "..." syntax creates a simple string.

Bit-vectors, simple or otherwise, print using the #"101010..." syntax.
Upon read-in, this format produces a simple bit-vector.  Bit vectors do
observe SUPPRESS-ARRAY-PRINTING.

All other vectors print out using the #(...) syntax, observing
PRINLEVEL, PRINLENGTH, and SUPPRESS-ARRAY-PRINTING.  This format reads
in as a simple vector of element-type T.

All other arrays print out using the syntax #nA(...), where n is the
number of dimensions and the list is a nest of sublists n levels deep,
with the array elements at the deepest level.  This form observes
PRINLEVEL, PRINLENGTH, and SUPPRESS-ARRAY-PRINTING.  This format reads
in as an array of element-type T.
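
By way of illustration, the printed representations described above would
look like:

"abc"                  ;a string of three characters
#"10110"               ;a bit-vector of five bits
#(A B 3)               ;a vector of element-type T, length three
#2A((1 2 3) (4 5 6))   ;a 2-by-3 array of element-type T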

Query: I am still a bit uneasy about the funny string-like syntax for
bit vectors.  Clearly we need some way to read these in that does not
turn into a type-T vector.  An alternative might be to allow #(...) to
be a vector of element-type T, as it is now, but to take the #n(...)
syntax to mean a vector of element-type (MOD n).  A bit-vector would
then be #2(1 0 1 0...) and we would have a parallel notation available
for byte vectors, 32-bit word vectors, etc.  The use of the #n(...)
syntax to indicate the length of the vector always struck me as a bit
useless anyway.  One flaw in this scheme is that it does not extend to
multi-D arrays.  Before someone suggests it, let me say that I don't
like #nAm(...), where n is the rank and m is the element-type -- it
would be too hard to remember which number was which.  But even with
this flaw, the #n(...) syntax might be useful.

∂16-Sep-82  2049	Rodney A. Brooks <BROOKS at MIT-OZ at MIT-MC> 	Re: Revised array proposal (long)
Date: 16 Sep 1982 2345-EDT
From: Rodney A. Brooks <BROOKS at MIT-OZ at MIT-MC>
Subject: Re: Revised array proposal (long)
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 16-Sep-82 2334-EDT

I thought the idea of keeping VECTOR, VREF etc. was that they would
be precisely for what you call the SIMPLE-VECTOR case. Having all of
ARRAYs, VECTORs and SIMPLE-s puts the cognitive overhead up above what
it was to start with. I think the types should be:
ARRAY, STRING, VECTOR and maybe STRING-VECTOR
where the latter is what you called SIMPLE-STRING. I've left out
the BIT cases because I couldn't think of any name better than BITS for
the STRING analogy.
-------

∂16-Sep-82  2051	Scott E. Fahlman <Fahlman at Cmu-20c> 	Portable declarations
Date: Thursday, 16 September 1982  23:49-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Kent M. Pitman <KMP at MIT-MC>
Cc:   COMMON-LISP at SU-AI
Subject: Portable declarations


KMP's proposed format for version-dependent declarations might not
confuse the Lisp system, but it would confuse me.  I would prefer to see
a special declaration form for handling the common case of
implementation-dependent declarations, rather than trying to make
everything infinitely powerful and general.

∂16-Sep-82  2207	Kent M. Pitman <KMP at MIT-MC>
Date: 17 September 1982 01:07-EDT
From: Kent M. Pitman <KMP at MIT-MC>
To: fahlman at CMU-20C
cc: Common-Lisp at SU-AI

I only suggested a mechanism, not a syntax. The example I gave was only to
illustrate how little the system had to support in order to leave this in
the user domain. The advantage is that it would allow users to abstract their
declarations in whatever way they felt most convenient. At some later time,
we might standardize on someone's particular suggestions after we'd had a
chance to try things out. I'm just a little concerned about prematurely
selecting too much arbitrary syntax for declarations. Declarations are the 
sort of thing that are hard to plan for because you just never know what you're
going to want to declare or on what basis you're going to want to declare it
when you talk about portable code. Suppose it later becomes interesting to
make certain kinds of declarations based on other kinds of conditions than
just operating system or site. You'd have to introduce new kinds of arbitrary
syntax, whereas my proposal by its very vagueness and generality allows for
some flexibility in expanding the things one can declare without requiring a
modification to the common-lisp spec.

∂16-Sep-82  2330	Glenn S. Burke <GSB at MIT-ML> 	array proposal    
Date: 17 September 1982 02:31-EDT
From: Glenn S. Burke <GSB at MIT-ML>
Subject: array proposal
To: common-lisp at SU-AI

I go with Rodney's interpretation, especially in terms of the
accessors.  I've been thinking about what it will take to do this in
NIL, and what it comes down to is that it is hardly worthwhile having
the specialized accessors VREF, CHAR, and BIT if they need to handle
the general cases of those types of arrays.  (This is unfortunate
because, at least in the case of anything called a string, i would
feel bad if CHAR didn't work on it.  On the other hand, with kludging,
discriminating between exactly two types, such as STRING and EXTEND,
is easier than a dispatch, but still so gross to do inline compared
to what one would get otherwise that i wouldn't do it inline.  I
experimented with this once.)

What it comes down to is that for brevity (and interfacing to a
currently stupid compiler), i am going to provide an accessing
primitive for every simple type.  If these are not provided in
common-lisp, then they will probably have "%" as the first character
of their names and be in the chartreuse pages.  This has nothing to do
with preserving the status-quo with what NIL already uses these names
for (the simple ones, as it happens), but rather the ability to
constrain the type of reference briefly, because i believe that they
will be heavily used (certainly in the NIL internals they will be).

Of course, the NIL-colored pages will also contain things like
%string-replace, %string-translate, etc., for those who want that kind
of thing.  (These are the MOVC3 and MOVTC instructions.)

∂17-Sep-82  1235	STEELE at CMU-20C 	Proposed evaluator for Common LISP (very long)
Date: 17 Sep 1982 1534-EDT
From: STEELE at CMU-20C
Subject: Proposed evaluator for Common LISP (very long)
To: common-lisp at SU-AI

There follows a several-page file containing a proposed
LISP definition of a Common LISP evaluator.  It maintains
the lexical environment as list structure, and assumes PROGV
as a primitive for accomplishing special bindings (but the use
of PROGV is hidden in a macro).
Most of the hair is in the processing of lambda-lists.
The functions tend to pass a dozen or more parameters;
loops are accomplished primarily by recursion, which may not
be tail-recursive because of the use of PROGV to establish
a special binding for one parameter before processing the next
parameter specifier.
I intend soon to send out two more versions of this evaluator;
one that uses special variables internally instead of
passing dozens of parameters, and one that is heavily bummed for
Spice LISP, and uses %BIND instead of PROGV to do special
bindings.
-----------------------------------------------------------
;;; This evaluator splits the lexical environment into four
;;; logically distinct entities:
;;;	VENV = lexical variable environment
;;;	FENV = lexical function and macro environment
;;;	BENV = block name environment
;;;	GENV = go tag environment
;;; Each environment is an a-list.  It is never the case that
;;; one can grow and another shrink simultaneously; the four
;;; parts could be united into a single a-list.  The four-part
;;; division saves consing and search time.
;;;
;;; Each entry in VENV has one of two forms: (VAR VALUE) or (VAR).
;;; The first indicates a lexical binding of VAR to VALUE, and the
;;; second indicates a special binding of VAR (implying that the
;;; special value should be used).
;;;
;;; Each entry in FENV looks like (NAME TYPE . FN), where NAME is the
;;; functional name, TYPE is either FUNCTION or MACRO, and FN is the
;;; function or macro-expansion function, respectively.  Entries of
;;; type FUNCTION are made by FLET and LABELS; those of type MACRO
;;; are made by MACROLET.
;;;
;;; Each entry in BENV looks like (NAME NIL), where NAME is the name
;;; of the block.  The NIL is there primarily so that two distinct
;;; conses will be present, namely the entry and the cdr of the entry.
;;; These are used internally as catch tags, the first for RETURN and the
;;; second for RESTART.  If the NIL has been clobbered to be INVALID,
;;; then the block has been exited, and a return to that block is an error.
;;;
;;; Each entry in GENV looks like (TAG MARKER . BODY), where TAG is
;;; a go tag, MARKER is a unique cons used as a catch tag, and BODY
;;; is the statement sequence that follows the go tag.  If the car of
;;; MARKER, normally NIL, has been clobbered to be INVALID, then
;;; the tag body has been exited, and a go to that tag is an error.

;;; An interpreted-lexical-closure contains a function (normally a
;;; lambda-expression) and the lexical environment.

(defstruct interpreted-lexical-closure function venv fenv benv genv)


;;; The EVALHOOK feature allows a user-supplied function to be called
;;; whenever a form is to be evaluated.  The presence of the lexical
;;; environment requires an extension of the feature as it is defined
;;; in MacLISP.  Here, the user hook function must accept not only
;;; the form to be evaluated, but also the components of the lexical
;;; environment; these must then be passed verbatim to EVALHOOK or
;;; *EVAL in order to perform the evaluation of the form correctly.
;;; The precise number of components should perhaps be allowed to be
;;; implementation-dependent, so it is probably best to require the
;;; user hook function to accept arguments as (FORM &REST ENV) and
;;; then to perform evaluation by (APPLY #'EVALHOOK FORM HOOKFN ENV),
;;; for example.

(defvar evalhook nil)

(defun evalhook (exp hookfn venv fenv benv genv)
  (let ((evalhook hookfn)) (%eval exp venv fenv benv genv)))

(defun eval (exp)
  (%eval exp nil nil nil nil))
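
;;; By way of illustration only (TRACE-HOOK is a made-up name, not part of
;;; this proposal): a user hook that prints each form handed to it and then
;;; evaluates that form with the hook turned off, following the
;;; (FORM &REST ENV) convention described above.
;;; Invoke it with, e.g., (evalhook '(+ 1 2) #'trace-hook nil nil nil nil).

(defun trace-hook (form &rest env)
  (format t "~&Evaluating: ~S" form)
  (let ((evalhook nil))			;disable the hook during re-evaluation
    (apply #'*eval form env)))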

;;; *EVAL looks useless here, but does more complex things
;;; in alternative implementations of this evaluator.

(defun *eval (exp venv fenv benv genv)
  (%eval exp venv fenv benv genv))
!
;;; Function names beginning with "%" are intended to be internal
;;; and not defined in the Common LISP white pages.

;;; %EVAL is the main evaluation function.

(defun %eval (exp venv fenv benv genv)
  (if (not (null evalhook))
      (funcall evalhook exp venv fenv benv genv)
      (typecase exp
	;; A symbol is first looked up in the lexical variable environment.
	(symbol (let ((slot (assoc exp venv)))
		  (cond ((and (not (null slot)) (not (null (cdr slot))))
			 (cadr slot))
			((boundp exp) (symbol-value exp))
			(t (cerror :unbound-variable
				   "The symbol ~S has no value"
				   exp)))))
	;; Numbers, strings, and characters self-evaluate.
	((or number string character) exp)
	;; Conses require elaborate treatment based on the car.
	(cons (typecase (car exp)
		;; A symbol is first looked up in the lexical function environment.
		;; This lookup is cheap if the environment is empty, a common case.
		(symbol
		 (let ((fn (car exp)))
		   (loop (let ((slot (assoc fn fenv)))
			   (unless (null slot)
			     (return (case (cadr slot)
				       (macro (%eval (%macroexpand
						      (cddr slot)
						      (if (eq fn (car exp))
							  exp
							  (cons fn (cdr exp))))
						 venv fenv benv genv))
				       (function (%apply (cddr slot)
							 (%evlis (cdr exp) venv fenv benv genv)))
				       (t <implementation-error>)))))
			 ;; If not in lexical function environment,
			 ;;  try the definition cell of the symbol.
			 (when (fboundp fn)
			   (return (cond ((special-form-p fn)
					  (%invoke-special-form
					   fn (cdr exp) venv fenv benv genv))
					 ((macro-p fn)
					  (%eval (%macroexpand
						  (get-macro-function (symbol-function fn))
						  (if (eq fn (car exp))
						      exp
						      (cons fn (cdr exp))))
						 venv fenv benv genv))
					 (t (%apply (symbol-function fn)
						    (%evlis (cdr exp) venv fenv benv genv))))))
			 (setq fn
			       (cerror :undefined-function
				       "The symbol ~S has no function definition"
				       fn))
			 (unless (symbolp fn)
			   (return (%apply fn (%evlis (cdr exp) venv fenv benv genv)))))))
		;; A cons in function position must be a lambda-expression.
		;; Note that the construction of a lexical closure is avoided here.
		(cons (%lambda-apply (car exp) venv fenv benv genv
				     (%evlis (cdr exp) venv fenv benv genv)))
		(t (%eval (cerror :invalid-form
				  "Cannot evaluate the form ~S: function position has invalid type ~S"
				  exp (type-of (car exp)))
			  venv fenv benv genv))))
	(t (%eval (cerror :invalid-form
			  "Cannot evaluate the form ~S: invalid type ~S"
			  exp (type-of exp))
		  venv fenv benv genv)))))
!
;;; Given a list of forms, evaluate each and return a list of results.

(defun %evlis (forms venv fenv benv genv)
  (mapcar #'(lambda (form) (%eval form venv fenv benv genv)) forms))

;;; Given a list of forms, evaluate each, discarding the results of
;;; all but the last, and returning all results from the last.

(defun %evprogn (body venv fenv benv genv)
  (if (endp body) nil
      (do ((b body (cdr b)))
	  ((endp (cdr b))
	   (%eval (car b) venv fenv benv genv))
	(%eval (car b) venv fenv benv genv))))

;;; APPLY takes a function, a number of single arguments, and finally
;;; a list of all remaining arguments.  The following song and dance
;;; attempts to construct efficiently a list of all the arguments.

(defun apply (fn firstarg &rest args*)
  (%apply fn
	  (cond ((null args*) firstarg)
		((null (cdr args*)) (cons firstarg (car args*)))
		(t (do ((x args* (cdr x))
			(z (cddr args*) (cdr z)))
		       ((null z)
			(rplacd x (cadr x))
			(cons firstarg (car args*))))))))
!
;;; %APPLY does the real work of applying a function to a list of arguments.

(defun %apply (fn args)
  (typecase fn
    ;; For closures over dynamic variables, complex magic is required.
    (closure (with-closure-bindings-in-effect fn
					      (%apply (closure-function fn) args)))
    ;; For a compiled function, an implementation-dependent "spread"
    ;;  operation and invocation is required.
    (compiled-function (%invoke-compiled-function fn args))
    ;; The same goes for a compiled closure over lexical variables.
    (compiled-lexical-closure (%invoke-compiled-lexical-closure fn args))
    ;; The treatment of interpreted lexical closures is elucidated fully here.
    (interpreted-lexical-closure
     (%lambda-apply (interpreted-lexical-closure-function fn)
		    (interpreted-lexical-closure-venv fn)
		    (interpreted-lexical-closure-fenv fn)
		    (interpreted-lexical-closure-benv fn)
		    (interpreted-lexical-closure-genv fn)
		    args))
    ;; For a symbol, the function definition is used, if it is a function.
    (symbol (%apply (cond ((not (fboundp fn))
			   (cerror :undefined-function
				   "The symbol ~S has no function definition"
				   fn))
			  ((special-form-p fn)
			   (cerror :invalid-function
				   "The symbol ~S cannot be applied: it names a special form"
				   fn))
			  ((macro-p fn)
			   (cerror :invalid-function
				   "The symbol ~S cannot be applied: it names a macro"
				   fn))
			  (t (symbol-function fn)))
		    args))
    ;; Applying a raw lambda-expression uses the null lexical environment.
    (cons (if (eq (car fn) 'lambda)
	      (%lambda-apply fn nil nil nil nil args)
	      (%apply (cerror :invalid-function
			      "~S is not a valid function"
			      fn)
		      args)))
    (t (%apply (cerror :invalid-function
		       "~S has an invalid type ~S for a function"
		       fn (type-of fn))
	       args))))
!
;;; %LAMBDA-APPLY is the hairy part, that takes care of applying
;;; a lambda-expression in a given lexical environment to given
;;; arguments.  The complexity arises primarily from the processing
;;; of the parameter list.
;;;
;;; If at any point the lambda-expression is found to be malformed
;;; (typically because of an invalid parameter list), or if the list
;;; of arguments is not suitable for the lambda-expression, a correctable
;;; error is signalled; correction causes a throw to be performed to
;;; the tag %LAMBDA-APPLY-RETRY, passing back a (possibly new)
;;; lambda-expression and a (possibly new) list of arguments.
;;; The application is then retried.  If the new lambda-expression
;;; is not really a lambda-expression, then %APPLY is used instead of
;;; %LAMBDA-APPLY.
;;;
;;; In this evaluator, PROGV is used to instantiate variable bindings
;;; (though its use is embedded with a macro called %BIND-VAR).
;;; The throw that precedes a retry will cause special bindings to
;;; be popped before the retry.

(defun %lambda-apply (lexp venv fenv benv genv args)
  (multiple-value-bind (newfn newargs)
		       (catch '%lambda-apply-retry
			 (return-from %lambda-apply
			   (%lambda-apply-1 lexp venv fenv benv genv args)))
    (if (and (consp newfn) (eq (car newfn) 'lambda))
	(%lambda-apply newfn venv fenv benv genv newargs)
	(%apply newfn newargs))))

;;; Calling this function will unwind all special variables
;;; and cause FN to be applied to ARGS in the original lexical
;;; and dynamic environment in force when %LAMBDA-APPLY was called.

(defun %lambda-apply-retry (fn args)
  (throw '%lambda-apply-retry (values fn args)))

;;; This function is convenient when the lambda expression is found
;;; to be malformed.  REASON should be a string explaining the problem.

(defun %bad-lambda-exp (lexp oldargs reason)
  (%lambda-apply-retry
   (cerror :invalid-function
	   "Improperly formed lambda-expression ~S: ~A"
	   lexp reason)
   oldargs))

;;; (%BIND-VAR VAR VALUE . BODY) evaluates VAR to produce a symbol name
;;; and VALUE to produce a value.  If VAR is determined to have been
;;; declared special (as indicated by the current binding of the variable
;;; SPECIALS, which should be a list of symbols, or by a SPECIAL property),
;;; then a special binding is established using PROGV.  Otherwise an
;;; entry is pushed onto the a-list presumed to be in the variable VENV.

(defmacro %bind-var (var value &body body)
  `(let ((var ,var) (value ,value))
     (let ((specp (or (member var specials) (get var 'special))))
       (progv (and specp (list var)) (and specp (list value))
	 (push (if specp (list var) (list var value)) venv)
	 ,@body))))

;;; %LAMBDA-KEYWORD-P is true iff X (which must be a symbol)
;;; has a name beginning with an ampersand.

(defun %lambda-keyword-p (x)
  (char= #\& (char 0 (symbol-pname x))))
!
;;; %LAMBDA-APPLY-1 is responsible for verifying that LEXP is
;;; a lambda-expression, for extracting a list of all variables
;;; declared SPECIAL in DECLARE forms, and for finding the
;;; body that follows any DECLARE forms.

(defun %lambda-apply-1 (lexp venv fenv benv genv args)
  (cond ((or (not (consp lexp))
	     (not (eq (car lexp) 'lambda))
	     (atom (cdr lexp))
	     (not (listp (cadr lexp))))
	 (%bad-lambda-exp lexp args "improper lambda or lambda-list"))
	(t (do ((body (cddr lexp) (cdr body))
		(specials '()))
	       ((or (endp body)
		    (not (listp (car body)))
		    (not (eq (caar body) 'declare)))
		(%bind-required lexp args (cadr lexp) venv fenv benv genv venv args specials body))
	     (dolist (decl (cdar body))
	       (when (eq (car decl) 'special)
		 (setq specials
		       (if (null specials)		;Avoid consing
			   (cdr decl)
			   (append (cdr decl) specials)))))))))

;;; %BIND-REQUIRED handles the pairing of arguments to required parameters.
;;; Error checking is performed for too few or too many arguments.
;;; If a lambda-list keyword is found, %TRY-OPTIONAL is called.
;;; Here, as elsewhere, if the binding process terminates satisfactorily
;;; then the body is evaluated using %EVPROGN in the newly constructed
;;; dynamic and lexical environment.

(defun %bind-required (lexp oldargs varlist oldvenv fenv benv genv venv args specials body)
  (cond ((endp varlist)
	 (if (null args)
	     (%evprogn body venv fenv benv genv)
	     (%lambda-apply-retry lexp
				  (cerror :too-many-arguments
					  "Too many arguments for function ~S: ~S"
					  lexp args))))
	((not (symbolp (car varlist)))
	 (%bad-lambda-exp lexp oldargs "required parameter name not a symbol"))
	((%lambda-keyword-p (car varlist))
	 (%try-optional lexp oldargs varlist oldvenv fenv benv genv venv args specials body))
	((null args)
	 (%lambda-apply-retry lexp 
			      (cerror :too-few-arguments
				      "Too few arguments for function ~S: ~S"
				      lexp oldargs)))
	  (t (%bind-var (car varlist) (car args)
			(%bind-required lexp oldargs (cdr varlist) oldvenv fenv benv genv venv (cdr args) specials body)))))
!
;;; %TRY-OPTIONAL determines whether the lambda-list keyword &OPTIONAL
;;; has been found.  If so, optional parameters are processed; if not,
;;; the buck is passed to %TRY-REST.

(defun %try-optional (lexp oldargs varlist oldvenv fenv benv genv venv args specials body)
  (cond ((eq (car varlist) '&optional)
	 (%bind-optional lexp oldargs (cdr varlist) oldvenv fenv benv genv venv args specials body))
	(t (%try-rest lexp oldargs varlist oldvenv fenv benv genv venv args specials body))))

;;; %BIND-OPTIONAL determines whether the parameter list is exhausted.
;;; If not, it parses the next specifier.

(defun %bind-optional (lexp oldargs varlist oldvenv fenv benv genv venv args specials body)
  (cond ((endp varlist)
	 (if (null args)
	     (%evprogn body venv fenv benv genv)
	     (%lambda-apply-retry lexp
				  (cerror :too-many-arguments
					  "Too many arguments for function ~S: ~S"
					  lexp args))))
	(t (let ((varspec (car varlist)))
	     (cond ((symbolp varspec)
		    (if (%lambda-keyword-p varspec)
			(%try-rest lexp oldargs varlist oldvenv fenv benv genv venv args specials body)
			(%process-optional lexp oldargs varlist oldvenv fenv benv genv
					   venv args specials body varspec nil nil)))
		   ((and (consp varspec)
			 (symbolp (car varspec))
			 (listp (cdr varspec))
			 (or (endp (cddr varspec))
			     (and (symbolp (caddr varspec))
				  (not (null (caddr varspec)))
				  (endp (cdddr varspec)))))
		    (%process-optional lexp oldargs varlist oldvenv fenv benv genv
				       venv args specials body
				       (car varspec)
				       (cadr varspec)
				       (caddr varspec)))
		   (t (%bad-lambda-exp lexp oldargs "malformed optional parameter specifier")))))))

;;; %PROCESS-OPTIONAL takes care of binding the parameter,
;;; and also the supplied-p variable, if any.

(defun %process-optional (lexp oldargs varlist oldvenv fenv benv genv venv args specials body var init varp)
  (let ((value (if (null args) (%eval init venv fenv benv genv) (car args))))
    (%bind-var var value
      (if varp
	  (%bind-var varp (not (null args))
	    (%bind-optional lexp oldargs (cdr varlist) oldvenv fenv benv genv venv (cdr args) specials body))
	  (%bind-optional lexp oldargs (cdr varlist) oldvenv fenv benv genv venv (cdr args) specials body)))))
!
;;; %TRY-REST determines whether the lambda-list keyword &REST
;;; has been found.  If so, the rest parameter is processed;
;;; if not, the buck is passed to %TRY-KEY, after a check for
;;; too many arguments.

(defun %try-rest (lexp oldargs varlist oldvenv fenv benv genv venv args specials body)
  (cond ((eq (car varlist) '&rest)
	 (%bind-rest lexp oldargs (cdr varlist) oldvenv fenv benv genv venv args specials body))
	((and (not (eq (car varlist) '&key))
	      (not (null args)))
	 (%lambda-apply-retry lexp
			      (cerror :too-many-arguments
				      "Too many arguments for function ~S: ~S"
				      lexp args)))
	(t (%try-key lexp oldargs varlist oldvenv fenv benv genv venv args specials body))))

;;; %BIND-REST ensures that there is a parameter specifier for
;;; the &REST parameter, binds it, and then evaluates the body or
;;; calls %TRY-KEY.

(defun %bind-rest (lexp oldargs varlist oldvenv fenv benv genv venv args specials body)
  (cond ((or (endp varlist)
	     (not (symbolp (car varlist))))
	 (%bad-lambda-exp lexp oldargs "missing rest parameter specifier"))
	(t (%bind-var (car varlist) args
	     (cond ((endp (cdr varlist))
		    (%evprogn body venv fenv benv genv))
		   ((and (symbolp (cadr varlist))
			 (%lambda-keyword-p (cadr varlist)))
		    (%try-key lexp oldargs (cdr varlist) oldvenv fenv benv genv venv args specials body))
		   (t (%bad-lambda-exp lexp oldargs "malformed after rest parameter specifier")))))))
!
;;; %TRY-KEY determines whether the lambda-list keyword &KEY
;;; has been found.  If so, keyword parameters are processed;
;;; if not, the buck is passed to %TRY-AUX.

(defun %try-key (lexp oldargs varlist oldvenv fenv benv genv venv args specials body)
  (cond ((eq (car varlist) '&key)
	 (%bind-key lexp oldargs (cdr varlist) oldvenv fenv benv genv venv args specials body nil))
	(t (%try-aux lexp oldargs varlist oldvenv fenv benv genv venv specials body))))

;;; %BIND-KEY determines whether the parameter list is exhausted.
;;; If not, it parses the next specifier.

(defun %bind-key (lexp oldargs varlist oldvenv fenv benv genv venv args specials body keys)
  (cond ((endp varlist)
	 ;; Optional error check for bad keywords.
	 (do ((a args (cddr a)))
	     ((endp a))
	   (unless (member (car a) keys)
	     (cerror :unexpected-keyword
		     "Keyword not expected by function ~S: ~S"
		     lexp (car a))))
	 (%evprogn body venv fenv benv genv))
	(t (let ((varspec (car varlist)))
	     (cond ((symbolp varspec)
		    (if (%lambda-keyword-p varspec)
			(cond ((not (eq varspec '&allow-other-keywords))
			       (%try-aux lexp oldargs varlist oldvenv fenv benv genv venv specials body))
			      ((endp (cdr varlist))
			       (%evprogn body venv fenv benv genv))
			      ((%lambda-keyword-p (cadr varlist))
			       (%try-aux lexp oldargs (cdr varlist) oldvenv fenv benv genv venv specials body))
			      (t (%bad-lambda-exp lexp oldargs "invalid after &ALLOW-OTHER-KEYWORDS")))
			(%process-key lexp oldargs varlist oldvenv fenv benv genv
				      venv args specials body keys
				      (intern varspec keyword-package)
				      varspec nil nil)))
		   ((and (consp varspec)
			 (or (symbolp (car varspec))
			     (and (consp (car varspec))
				  (consp (cdar varspec))
				  (symbolp (cadar varspec))
				  (endp (cddar varspec))))
			 (listp (cdr varspec))
			 (or (endp (cddr varspec))
			     (and (symbolp (caddr varspec))
				  (not (null (caddr varspec)))
				  (endp (cdddr varspec)))))
		    (%process-key lexp oldargs varlist oldvenv fenv benv genv
				  venv args specials body keys
				  (if (consp (car varspec))
				      (caar varspec)
				      (intern (car varspec) keyword-package))
				  (if (consp (car varspec))
				      (cadar varspec)
				      (car varspec))
				  (cadr varspec)
				  (caddr varspec)))
		   (t (%bad-lambda-exp lexp oldargs "malformed keyword parameter specifier")))))))

;;; %PROCESS-KEY takes care of binding the parameter,
;;; and also the supplied-p variable, if any.

(defun %process-key (lexp oldargs varlist oldvenv fenv benv genv venv args specials body keys kwd var init varp)
  (let* ((tail (do ((a args (cddr a)))
		   ((endp a) nil)
		 (when (eq (car a) kwd)
		   (return a))))
	 (value (if tail (cadr tail) (%eval init venv fenv benv genv))))
    (%bind-var var value
      (if varp
	  (%bind-var varp (not (null tail))	;supplied-p reflects whether this keyword was given
	    (%bind-key lexp oldargs (cdr varlist) oldvenv fenv benv genv venv args specials body (cons kwd keys)))
	  (%bind-key lexp oldargs (cdr varlist) oldvenv fenv benv genv venv args specials body (cons kwd keys))))))
!
;;; %TRY-AUX determines whether the keyword &AUX
;;; has been found.  If so, auxiliary variables are processed;
;;; if not, an error is signalled.

(defun %try-aux (lexp oldargs varlist oldvenv fenv benv genv venv specials body)
  (cond ((eq (car varlist) '&aux)
	 (%bind-aux lexp oldargs (cdr varlist) oldvenv fenv benv genv venv specials body))
	(t (%bad-lambda-exp lexp oldargs "unknown or misplaced lambda-list keyword"))))

;;; %BIND-AUX determines whether the parameter list is exhausted.
;;; If not, it parses the next specifier.

(defun %bind-aux (lexp oldargs varlist oldvenv fenv benv genv venv specials body)
  (cond ((endp varlist)
	 (%evprogn body venv fenv benv genv))
	(t (let ((varspec (car varlist)))
	     (cond ((symbolp varspec)
		    (if (%lambda-keyword-p varspec)
			(%bad-lambda-exp lexp oldargs "unknown or misplaced lambda-list keyword")
			(%process-aux lexp oldargs varlist oldvenv fenv benv genv
				      venv specials body varspec nil)))
		   ((and (consp varspec)
			 (symbolp (car varspec))
			 (listp (cdr varspec))
			 (endp (cddr varspec)))
		    (%process-aux lexp oldargs varlist oldvenv fenv benv genv
				       venv specials body
				       (car varspec)
				       (cadr varspec)))
		   (t (%bad-lambda-exp lexp oldargs "malformed aux variable specifier")))))))

;;; %PROCESS-AUX takes care of binding the auxiliary variable.

(defun %process-aux (lexp oldargs varlist oldvenv fenv benv genv venv specials body var init)
    (%bind-var var (and init (%eval init venv fenv benv genv))
       (%bind-aux lexp oldargs (cdr varlist) oldvenv fenv benv genv venv specials body)))
!
;;; Definitions for various special forms and macros.

(defspec quote (obj) (venv fenv benv genv) obj)

(defspec function (fn) (venv fenv benv genv)
  (cond ((consp fn)
	 (cond ((eq (car fn) 'lambda)
		(make-interpreted-closure :function fn :venv venv :fenv fenv :benv benv :genv genv))
	       (t (cerror ???))))
	((symbolp fn)
	 (loop (let ((slot (assoc fn fenv)))
		 (unless (null slot)
		   (case (cadr slot)
		     (macro (cerror ???))
		     (function (return (cddr slot)))
		     (t <implementation-error>))))
	       (when (fboundp fn)
		 (cond ((or (special-form-p fn) (macro-p fn))
			(cerror ???))
		       (t (return (symbol-function fn)))))
	       (setq fn (cerror :undefined-function
				"The symbol ~S has no function definition"
				fn))
	       (unless (symbolp fn) (return fn))))
	(t (cerror ???))))

(defspec if (pred con &optional alt) (venv fenv benv genv)
  (if (%eval pred venv fenv benv genv)
      (%eval con venv fenv benv genv)
      (%eval alt venv fenv benv genv)))

;;; The BLOCK construct provides a PROGN with a named contour around it.
;;; It is interpreted by first putting an entry onto BENV, consisting
;;; of a 2-list of the name and NIL.  This provides two unique conses
;;; for use as catch tags.  Then the body is executed.
;;; If a RETURN or RESTART is interpreted, a throw occurs.  If the BLOCK
;;; construct is exited for any reason (including falling off the end, which
;;; returns the results of evaluating the last form in the body), the NIL in
;;; the entry is clobbered to be INVALID, to indicate that that particular
;;; entry is no longer valid for RETURN or RESTART.

(defspec block (name &body body) (venv fenv benv genv)
  (let ((slot (list name nil)))	;Use slot for return, (cdr slot) for restart
    (unwind-protect
     (catch slot
       (loop (catch (cdr slot)
	       (%evprogn body venv fenv (cons slot benv) genv))))
     (rplaca (cdr slot) 'invalid)))) 

(defspec return (form) (venv fenv benv genv)
  (let ((slot (assoc nil benv)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw slot (%eval form venv fenv benv genv))))))

(defspec return-from (name form) (venv fenv benv genv)
  (let ((slot (assoc name benv)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw slot (%eval form venv fenv benv genv))))))

(defspec restart (form) (venv fenv benv genv)
  (let ((slot (assoc nil benv)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw (cdr slot) (%eval form venv fenv benv genv))))))

(defspec restart-from (name form) (venv fenv benv genv)
  (let ((slot (assoc name benv)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw (cdr slot) (%eval form venv fenv benv genv))))))
!
(defmacro prog (vars &rest body)
  `(let ,vars (block nil (tagbody ,@ body))))

;;; The TAGBODY construct provides a body with GO tags in it.
;;; It is interpreted by first putting one entry onto GENV for
;;; every tag in the body; doing this ahead of time saves searching
;;; at GO time.  A unique cons whose car is NIL is constructed for
;;; use as a unique catch tag.  Then the body is executed.
;;; If a GO is interpreted, a throw occurs, sending as the thrown
;;; value the point in the body after the relevant tag.
;;; If the TAGBODY construct is exited for any reason (including
;;; falling off the end, which produces the value NIL), the car of
;;; the unique marker is clobbered to be INVALID, to indicate that
;;; tags associated with that marker are no longer valid.

(defspec tagbody (&rest body) (venv fenv benv genv)
  (do ((b body (cdr b))
       (marker (list nil)))
      ((endp b)
       (block exit
	 (unwind-protect
	  (loop (setq body
		      (catch marker
			(do ((b body (cdr b)))
			    ((endp b) (return-from exit nil))
			  (unless (atom (car b))
			    (%eval (car b) venv fenv benv genv))))))
	  (rplaca marker 'invalid))))
    (when (atom (car b))
      (push (list* (car b) marker (cdr b)) genv))))

(defspec go (tag) (venv fenv benv genv)
  (let ((slot (assoc tag genv)))
    (cond ((null slot) (ferror ???<unseen-go-tag>))
	  ((eq (caadr slot) 'invalid) (ferror ???<go-tag-no-longer-valid>))
	  (t (throw (cadr slot) (cddr slot))))))
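
;;; As a rough illustration only (this assumes the %EVAL, %BIND-VAR, and
;;; %EVPROGN definitions and the environment representation given elsewhere
;;; in this proposal), applying the interpreter to a small lambda expression
;;; might look like:

(%lambda-apply-1 '(lambda (x &optional (y 10) &rest z) (list x y z))
		 '() '() '() '()	;empty venv, fenv, benv, and genv
		 '(1 2 3 4))

;;; X is bound to 1, Y to 2 (so the default 10 is never evaluated), and Z to
;;; (3 4); the body is then evaluated by %EVPROGN, yielding (1 2 (3 4)).
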
-------

Simple Switch Proposal
∂17-Sep-82  1318	Scott E. Fahlman <Fahlman at Cmu-20c> 	Revised array proposal (long)  
Date: Thursday, 16 September 1982  23:27-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: Revised array proposal (long)


Here is a revision of my array proposal, fixed up in response to some of
the feedback I've received.  See if you like it any better than the
original.  In particular, I have explicitly indicated that certain
redundant forms such as MAKE-VECTOR should be retained, and I have
removed the :PRINT keyword, since I now believe that it causes more
trouble than it is worth.  A revised printing proposal appears at the
end of the document.

**********************************************************************

Arrays can be 1-D or multi-D.  All arrays can be created by MAKE-ARRAY
and can be accessed with AREF.  Storage is done via SETF of an AREF.
The term VECTOR refers to any array of exactly one dimension.
Vectors are special, in that they are also sequences, and can be
referenced by ELT.  Also, only vectors can have fill pointers.

Vectors can be specialized along several distinct axes.  The first is by
the type of the elements, as specified by the :ELEMENT-TYPE keyword to
MAKE-ARRAY.  A vector whose element-type is STRING-CHAR is referred to
as a STRING.  Strings, when they print, use the "..." syntax; they also
are the legal inputs to a family of string-functions, as defined in the
manual.  A vector whose element-type is BIT (alias (MOD 2)), is a
BIT-VECTOR.  These are special because they form the set of legal inputs
to the boolean bit-vector functions.  (We might also want to print them
in a strange way -- see below.)

Some implementations may provide a special, highly efficient
representation for simple vectors.  A simple vector is (of course) 1-D,
cannot have a fill pointer, cannot be displaced, and cannot be altered
in size after its creation.  To get a simple vector, you use the :SIMPLE
keyword to MAKE-ARRAY with a non-null value.  If there are any
conflicting options specified, an error is signalled.  If an
implementation does not support simple vectors, this keyword/value is
ignored except that the error is still signalled on inconsistent cases.

We need a new set of type specifiers for simple things: SIMPLE-VECTOR,
SIMPLE-STRING, and SIMPLE-BIT-VECTOR, with the corresponding
type-predicate functions.  Simple vectors are referenced by AREF in the
usual way, but the user may use THE or DECLARE to indicate at
compile-time that the argument is simple, with a corresponding increase
in efficiency.  Implementations that do not support simple vectors
ignore the "simple" part of these declarations.
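
For instance (illustrative only, using the type names above), a function
that promises its argument is simple might be written:

(defun third-element (v)
  (declare (type simple-vector v))	;ignored where simple vectors are not supported
  (aref (the simple-vector v) 2))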

Strings (simple or non-simple) self-eval; all other arrays cause an
error when passed to EVAL.  EQUAL descends into strings, but not
into any other arrays.  EQUALP descends into arrays of all kinds,
comparing the corresponding elements with EQUALP.  EQUALP is false
if the array dimensions are not the same, but it is not sensitive to
the element-type of the array, whether it is simple, etc.  In comparing
the dimensions of vectors, EQUALP uses the length from 0 to the fill
pointer; it does not look at any elements beyond the fill pointer.

The set of type-specifiers required for all of this is ARRAY, VECTOR,
STRING, BIT-VECTOR, SIMPLE-VECTOR, SIMPLE-STRING, SIMPLE-BIT-VECTOR.
Each of these has a corresponding type-P predicate, and each can be
specified in list form, along with the element-type and dimension(s).

MAKE-ARRAY takes the following keywords: :ELEMENT-TYPE, :INITIAL-VALUE,
:INITIAL-CONTENTS, :FILL-POINTER, and :SIMPLE.  There is still some
discussion as to whether we should retain array displacement, which
requires :DISPLACED-TO and :DISPLACED-INDEX-OFFSET.
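
For concreteness, a few calls as they might look under this proposal
(illustrative only; the keywords are the ones just listed):

(make-array 10 :element-type 'string-char :fill-pointer 0)	;a string with a fill pointer
(make-array 100 :element-type 'bit :simple t)			;a simple bit-vector
(make-array '(3 4) :initial-value 0)				;a 3-by-4 array of zeros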

The following functions are redundant, but should be retained for
clarity and emphasis in code: MAKE-VECTOR, MAKE-STRING, MAKE-BIT-VECTOR.
MAKE-VECTOR takes the same keywords as MAKE-ARRAY, but can only take a
single integer as the dimension argument.  MAKE-STRING and
MAKE-BIT-VECTOR are like MAKE-VECTOR, but do not take the :ELEMENT-TYPE
keyword, since the element-type is implicit.  Similarly, we should
retain the forms VREF, CHAR, and BIT, which are identical in operation
to AREF, but which declare their array argument to be VECTOR, STRING, or
BIT-VECTOR, respectively.

If the :SIMPLE keyword is not specified to MAKE-ARRAY or related forms,
the default is NIL.  However, vectors produced by random forms such as
CONCATENATE are simple, and vectors created when the reader sees #(...)
or "..." are also simple.

As a general rule, arrays are printed in a simple format that, upon
being read back in, produces a form that is EQUALP to the original.
However, some information may be lost in the printing process:
element-type restrictions, whether a vector is simple, whether it has a
fill pointer, whether it is displaced, and the identity of any element
that lies beyond the fill pointer.  This choice was made to favor ease
of interactive use; if the user really wants to preserve in printed form
some complex data structure containing non-simple arrays, he will have
to develop his own printer.

A switch, SUPPRESS-ARRAY-PRINTING, is provided for users who have lots
of large arrays around and don't want to see them trying to print.  If
non-null, this switch causes all arrays except strings to print in a
short, non-readable form that does not include the elements:
#<array-...>.  In addition, the printing of arrays and vectors (but not
of strings) is subject to PRINLEVEL and PRINLENGTH.

Strings, simple or otherwise, print using the "..."  syntax.  Upon
read-in, the "..." syntax creates a simple string.

Bit-vectors, simple or otherwise, print using the #"101010..." syntax.
Upon read-in, this format produces a simple bit-vector.  Bit vectors do
observe SUPPRESS-ARRAY-PRINTING.

All other vectors print out using the #(...) syntax, observing
PRINLEVEL, PRINLENGTH, and SUPPRESS-ARRAY-PRINTING.  This format reads
in as a simple vector of element-type T.

All other arrays print out using the syntax #nA(...), where n is the
number of dimensions and the list is a nest of sublists n levels deep,
with the array elements at the deepest level.  This form observes
PRINLEVEL, PRINLENGTH, and SUPPRESS-ARRAY-PRINTING.  This format reads
in as an array of element-type T.
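
For example, a 2-by-3 array holding the integers 1 through 6 would print
as #2A((1 2 3) (4 5 6)) under this scheme.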

Query: I am still a bit uneasy about the funny string-like syntax for
bit vectors.  Clearly we need some way to read these in that does not
turn into a type-T vector.  An alternative might be to allow #(...) to
be a vector of element-type T, as it is now, but to take the #n(...)
syntax to mean a vector of element-type (MOD n).  A bit-vector would
then be #2(1 0 1 0...) and we would have a parallel notation available
for byte vectors, 32-bit word vectors, etc.  The use of the #n(...)
syntax to indicate the length of the vector always struck me as a bit
useless anyway.  One flaw in this scheme is that it does not extend to
multi-D arrays.  Before someone suggests it, let me say that I don't
like #nAm(...), where n is the rank and m is the element-type -- it
would be too hard to remember which number was which.  But even with
this flaw, the #n(...) syntax might be useful.

∂17-Sep-82  1336	Rodney A. Brooks <BROOKS at MIT-OZ at MIT-MC> 	Re: Revised array proposal (long)
Date: 16 Sep 1982 2345-EDT
From: Rodney A. Brooks <BROOKS at MIT-OZ at MIT-MC>
Subject: Re: Revised array proposal (long)
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 16-Sep-82 2334-EDT

I thought the idea of keeping VECTOR, VREF etc. was that they would
be precisely for what you call the SIMPLE-VECTOR case. Having all of
ARRAYs, VECTORs and SIMPLE-s puts the cognitive overhead up above what
it was to start with. I think the types should be:
ARRAY, STRING, VECTOR and maybe STRING-VECTOR
where the latter is what you called SIMPLE-STRING. I've left out
the BIT cases because I couldn't think of any name better than BITS for
the STRING analogy.
-------

∂17-Sep-82  1451	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	arrays
Date: Friday, 17 September 1982, 17:17-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: arrays
To: Killian at MIT-MULTICS, Common-Lisp at SU-AI
In-reply-to: The message of 16 Sep 82 15:09-EDT from Earl A.Killian <Killian at MIT-MULTICS>

I'm not sure about anyone else, but I didn't say anything about making
them more efficient.  I was only trying to talk about language semantics
and perceived complexity.  I actually don't think it's hard to implement
them such that if you don't use them, they don't slow anything else down
noticeably.

∂17-Sep-82  1450	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Revised array proposal (long)  
Date: Friday, 17 September 1982, 17:15-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: Revised array proposal (long)
To: Fahlman at Cmu-20c, common-lisp at SU-AI
In-reply-to: The message of 16 Sep 82 23:27-EDT from Scott E.Fahlman <Fahlman at Cmu-20c>

This mostly looks very good.  I am still hesitant about having arrays
print out their entire contents when printed, but I don't have any
particular counterproposal or complaint to make right now.

∂17-Sep-82  1741	David.Dill at CMU-10A (L170DD60) 	array proposal  
Date: 17 September 1982 2021-EDT (Friday)
From: David.Dill at CMU-10A (L170DD60)
To: common-lisp at SU-AI
Subject:  array proposal
Message-Id: <17Sep82 202113 DD60@CMU-10A>

The fact that EQUAL behaves differently on strings than on other vectors
seems to me to be a little ugly.  Why not have EQUAL descend into all
sequences?  This allows you to avoid descending into multi-d arrays, if
that's bad for some reason, and causes the correct behavior for lists
and strings, and (in my opinion) more useful behavior for other sequences,
since you can always use EQL for the current effect on non-string vectors.



∂17-Sep-82  1803	Kent M. Pitman <KMP at MIT-MC> 	EQUAL descending arrays
Date: 17 September 1982 21:03-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject: EQUAL descending arrays
To: David.Dill at CMU-10A
cc: COMMON-LISP at SU-AI

This idea was discussed quite a bit at the last common lisp meeting. It
seems to me that we addressed the issue you bring up specifically. We discussed
letting EQUALP do that and we also discussed introducing a primitive like
T's ALIKE? function. I wasn't taking notes. Perhaps someone that was could 
briefly summarize so that we don't have to re-enact that whole discussion.

∂17-Sep-82  1831	David.Dill at CMU-10A (L170DD60) 	equal descending into SEQUENCES
Date: 17 September 1982 2129-EDT (Friday)
From: David.Dill at CMU-10A (L170DD60)
To: common-lisp at SU-AI
Subject:  equal descending into SEQUENCES
Message-Id: <17Sep82 212907 DD60@CMU-10A>

As KMP notes, this was discussed at the last common lisp meeting.

I didn't take notes, but I think the issue wasn't resolved because the
discussion got sidetracked into what EQUALP should do and whether we should
adopt T's ALIKE? function. Both of these issues seem to me to be orthogonal
to the original question.

∂18-Sep-82  0225	Richard M. Stallman <RMS at MIT-AI> 	Portable declarations  
Date: 18 September 1982 05:27-EDT
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Portable declarations
To: KMP at MIT-MC
cc: COMMON-LISP at SU-AI

I agree that allowing macros to expand into declarations is a very
good idea.

∂18-Sep-82  1521	Earl A. Killian <EAK at MIT-MC> 	Proposed evaluator for Common LISP -- declarations  
Date: 18 September 1982 18:21-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Proposed evaluator for Common LISP -- declarations
To: STEELE at CMU-20C
cc: common-lisp at SU-AI

Your proposed evaluator ignores type declarations.  I really
think it should store them and check them.

Also, it occurs to me that there is no way to declare the type of
a static variable, which is a loss.  Without type specific
arithmetic, it's going to be necessary to declare things much
more often in Common Lisp for efficiency, so the declaration
facility must be complete.  There should also be a way to
discover the declared type of a special, if any.

Since evaluating a declare is supposed to signal an error,
shouldn't %lambda-apply-1 gobble up any declares that it sees,
rather than passing them off to %bind-required?

Also, it ought to error on illegal declarations, rather than
ignoring them.

∂18-Sep-82  1546	Earl A. Killian <EAK at MIT-MC> 	Proposed evaluator for Common LISP -- declarations  
Date: 18 September 1982 18:46-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Proposed evaluator for Common LISP -- declarations
To: STEELE at CMU-20C
cc: common-lisp at SU-AI

Excuse me for one part of the last message; of course there's a
way to declare the type of a dynamic variable -- just put the
declaration at top-level (I don't know where my mind was at the
time).  However there still needs to be a way to ask the type of
a variable (dynamic or otherwise).  This is not the same as the
type of the value of the variable, of course.  This will be
useful for the evaluator as well as for debugging.

How about a special form called var-type, such that
	(let ((x 1))
	     (declare (type integer x))
	     (var-type x))
returns INTEGER.

Also, it's annoying that a common case will be

(declare (type integer foo)
	 (special foo))

Is there any support for a declaration that combines the above?

Also, I understand that Maclisp compatibility may be important
enough to warrant the hack whereby (declare (special x)) has
different semantics at top-level than when imbedded, but should
its use be encouraged in new programs?  Perhaps there ought to
be (declare (super-special x)) for new programs?  Or perhaps
pervasive-special?  Unfortunately, it is a loss to make this name
that long.

∂18-Sep-82  1555	Earl A. Killian <EAK at MIT-MC> 	declarations
Date: 18 September 1982 18:55-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  declarations
To: STEELE at CMU-20C
cc: common-lisp at SU-AI

On the question of (declare (special x)) at top-level vs.
imbedded, perhaps the right thing to do is to leave the top-level
one named "special", and invent a new name for the imbedded one,
such as dynamic.  (declare (dynamic x)) at top-level would be
nop because it's non-pervasive like var declarations in general,
whereas (declare (special x)) would be prevasive as before.  Not
clear whether (declare (special x)) should be defined imbedded or
not.

The Maclisp unspecial doesn't exist; is this intentional?

∂18-Sep-82  2117	MOON at SCRC-TENEX 	Declarations from macros 
Date: Sunday, 19 September 1982  00:02-EDT
From: MOON at SCRC-TENEX
To:   COMMON-LISP at SU-AI
Subject: Declarations from macros
In-reply-to: The message of 16 Sep 1982 23:39-EDT from Kent M. Pitman <KMP at MIT-MC>

Kent's idea of having macros able to expand into declarations is probably
a good idea.  Places that look for declarations are probably all going to
expand macros eventually anyway.

Documentation strings should be treated the same as declarations.  It might even
be all right to replace "naked" documentation strings with declarations.

What about a macro that wants to expand into both a declaration and some code,
perhaps initializations of variables?  Or do such macros always take a body and
expand into a LET?

∂18-Sep-82  2122	MOON at SCRC-TENEX 	Indirect arrays
Date: Sunday, 19 September 1982  00:07-EDT
From: MOON at SCRC-TENEX
To:   common-lisp at SU-AI
Subject: Indirect arrays
In-reply-to: The message of 16 Sep 1982 1557-EDT () from Guy.Steele at CMU-10A

There are other uses for indirect arrays besides those in Guy's message.  For
what they are worth:

(4) Making a subsequence of an array manipulable as an array (a first-class
object rather than a triplet of array,start,end), while retaining sharing
of side-effects on the elements, when not implementing FORTRAN or PL/I.

(5) Making an array of n-bit bytes look like an array of m-bit bytes.

For the n-dimension/1-dimension case, rather than making specific kludges
for the particular cases that happened to be thought of first (MAPARRAY and
RAVEL), I would prefer to put in a general AREF-like function for accessing
n-dimensional arrays as if they were 1-dimensional, and its corresponding
SETF-er.  These already exist in the Lisp machine, but I won't tell you their
names, since the names are gross.
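
A sketch of the sort of general accessor meant here (the name
ROW-MAJOR-ACCESS and the row-major ordering are only illustrative, not the
Lisp machine names):

(defun row-major-access (array index)
  ;; View a multi-dimensional array as one-dimensional, last subscript
  ;; varying fastest, and fetch the INDEXth element.
  (let ((subscripts '()))
    (dolist (d (reverse (array-dimensions array)))
      (push (mod index d) subscripts)
      (setq index (floor index d)))
    (apply #'aref array subscripts)))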

∂18-Sep-82  2207	Richard M. Stallman <RMS at MIT-AI> 	Printing Arrays   
Date: 19 September 1982 01:08-EDT
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Printing Arrays
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI

Rather than have each array say whether to print its elements,
let the user decide after he has seen the arrays print in a brief format
that he wants to see them again in a verbose format.

Have variables array-prinlevel and array-prinlength with default
values 1 and 4.  This means that only one level of array prints its
elements, and only the first four elements at that.  Any array within
those four elements is printed without mentioning any elements.

Then have a function of no arguments which increments those variables
suitably and returns the value of *.  Suppose it is RPA (reprint
array).  It might increment array-prinlevel by 1 and array-prinlength
by 100.  After you see a value that was or included an array, just do
(RPA) and you get to see it again with more detail.

Meanwhile, programs doing explicit printing can use the same variables
to control exactly what goes on.
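
A minimal sketch of what this might look like (the variable names and the
increments 1 and 100 are the ones suggested above; the rest is illustrative):

(defvar array-prinlevel 1)
(defvar array-prinlength 4)

(defun rpa ()
  ;; Loosen the array printing limits and hand back the last value, so the
  ;; read-eval-print loop reprints it in more detail.
  (setq array-prinlevel (+ array-prinlevel 1))
  (setq array-prinlength (+ array-prinlength 100))
  *)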

∂18-Sep-82  2310	Richard M. Stallman <RMS at MIT-AI> 	case    
Date: 19 September 1982 01:55-EDT
From: Richard M. Stallman <RMS at MIT-AI>
Subject: case
To: common-lisp at SU-AI

I really do not want to have to go through all the code of the Lisp
machine system and convert everything to lower case.  Really really.

There is a considerable community of users who like all their code in
upper case (except for comments and strings), and the editor has
a special feature to make it easy for them (Electric Shift Lock Mode).
Most of the system is written in this style.

∂19-Sep-82  0007	MOON at SCRC-TENEX 	Printing Arrays
Date: Sunday, 19 September 1982  02:52-EDT
From: MOON at SCRC-TENEX
To:   common-lisp at SU-AI
Subject: Printing Arrays
In-reply-to: The message of 19 Sep 1982 01:08-EDT from Richard M. Stallman <RMS at MIT-AI>

This is pretty reasonable.  If the extreme cases are provided for by
suitable values of array-prinlevel and array-prinlength (0 and infinity,
with NIL accepted as a synonym for infinity), we can satisfy everyone.

∂19-Sep-82  0032	Kent M. Pitman <KMP at MIT-MC> 	Minor changes to proposed reader syntax    
Date: 19 September 1982 03:33-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Minor changes to proposed reader syntax
To: COMMON-LISP at SU-AI

I propose that bit vectors be read by #*nnnn rather than #"nnnn" since it's
a waste of a good matchfix operator "..." to use on something with so 
deterministic an endpoint. #*nnnn has its end clearly delimited by anything
not a base-2 digit.

I further propose extending #*nnn to allow #m*nnn meaning that the bit 
vector referred to is m times the number of digits and filled with the bits 
given by nnn in radix 2↑m. So that #3*35 would be the same as #*011101 -- 
ie, the default radix would be 2 for bit vectors. This might be handy for 
people doing bit vectors with 4-bit bytes so they could write #4*A3 meaning
#*10100011. It is somewhat symmetric with #nR. I'm amenable to arguments that
one should write radix rather than byte size as in #8*35 and #16*A3. Also,
the choice of * as opposed to something else is completely arbitrary. NIL
used to use #* for something, I don't know if it still does.  Maybe they'd
prefer another character like underbar or something. I don't have any set
feeling for that, I just don't want to waste "..."'s expressiveness on 
something so simple as bits.

The reason I bring this up is that I found what I think is a really good use
of #"...". Following in the line of reasoning that `... should be like '...
when no comma is present, suppose #"..." were like "..." when no tilde was
present, but fully defined by:

    (defun read-sharpsign-doublequote (instream char)
      (uninch char instream)
      (format nil (read instream)))

so that something calling READ could not tell if the user had written, for
example, #"ABC~%DEF" or "ABC
DEF".

This #"..." has applications to indenting code nicely without requiring runtime
calls to FORMAT. Here's a case under discussion:

(defun f (x)
  "This is a doc string which looks nice only when indented
   but which has the problem that later when I do (GET-DOC 'F) I
   find that the second and third line of the string retain the indentation."
  x)

Could be re-written as

(defun f (x)
  #"This is a documentation string which does not share that bug~@
    and which is indented nicely."
  x)

This also makes it possible to consider Moon's last remark about putting the
doc string in the comment because it could be then indented nicely as in

(defun f (x)
  (declare (documentation #"This is a documentation string which is ~
			     indented quite a lot, but which still~@
			    looks nice later on when retrieved because ~
			     it is reformatted nicely at read time."))
  x)

This note does not mean to make any proposals related to declare or 
documentation strings. I was only using this as an example. Here's another
example that shows how worthwhile it is -- I used to write code from time
to time that said:

(defun g (x) (h #.(format nil "This is a string to type out~%on two lines.")))

Sometimes I'd even leave out the #. and let the string get recomputed 
at runtime. But now I could just write:

(defun g (x) (h #"This is a string to type out~%on two lines."))

Does anyone buy this?

∂19-Sep-82  0038	Kent M. Pitman <KMP at MIT-MC>
Date: 19 September 1982 03:38-EDT
From: Kent M. Pitman <KMP at MIT-MC>
To: Moon at SCRC-TENEX
cc: COMMON-LISP at SU-AI

    Date: Sunday, 19 September 1982  00:02-EDT
    From: MOON at SCRC-TENEX

    ... What about a macro that wants to expand into both a declaration and
    some code, perhaps initializations of variables?  Or do such macros
    always take a body and expand into a LET?
-----
I thought some about this. Basically, inits were the only things I could think
of that had any business being in the expansion with declarations. But most
things that let declarations happen are of one of two classes -- applicative
(eg, LAMBDA) or compositional (eg, LET). In the former case, inits would just 
clobber some incoming value as in 
 (LAMBDA (X) (SETQ X 3) ...)
which seems silly. In the latter case, they'd be unneeded because rather than 
 (LET (X) (INIT-FIXNUM X 3) ...)
you'd want to write 
 (LET ((X 3)) (DECLARE-FIXNUM X) ...).
I'm willing to say that macros in this position must expand into 
either declaration or code but not both. This also saves you from people who
write useful macros that are not usable in arbitrary places because they do
gratuitous declarations that force them to go at the head of a lambda contour.

∂19-Sep-82  1216	Guy.Steele at CMU-10A 	Reply to msg by ALAN about PROG 
Date: 19 September 1982 1322-EDT (Sunday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Reply to msg by ALAN about PROG

    
    Have you considered that instead of:
    
    (defmacro prog (vars &rest body)
      `(let ,vars (block nil (tagbody ,@body))))
    
    We might actually want:
    
    (defmacro prog (vars &rest body)
      `(block nil (let ,vars (tagbody ,@body))))
    
    The question being what does (prog ((a (return))) ...) mean?
    
That is a good question.  I did it the other way because I didn't
want (RESTART) to flush the variable bindings.  Obviously what
we do about RESTART will affect this.

∂19-Sep-82  1549	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Minor changes to proposed reader syntax
Date: Sunday, 19 September 1982, 18:47-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: Minor changes to proposed reader syntax
To: COMMON-LISP at SU-AI
Cc: bsg at SCRC-TENEX at MIT-MC, acw at SCRC-TENEX at MIT-MC,
    jek at SCRC-TENEX at MIT-MC
In-reply-to: The message of 19 Sep 82 03:33-EDT from Kent M.Pitman <KMP at MIT-MC>,
             The message of 14 Sep 82 01:05-EDT from David A. Moon <Moon at SCRC-TENEX>,
             The message of 14 Sep 82 09:25-EDT from Bernard S Greenberg <BSG at SCRC-TENEX>,
             The message of 13 Sep 82 19:13-EDT from Daniel L. Weinreb <dlw at SCRC-TENEX>,
             The message of 13 Sep 82 18:15-EDT from Allan C. Wechsler <ACW at SCRC-TENEX>

I agree with KMP that bit vectors should not use #", even though I think
his proposed alternate use for #" is grotesque and definitely should not
be adopted.  #* would be fine for bit-vectors.  It should terminate on any
token-separator; finding an undelimited constituent character that is not
a valid digit should be an error.

Some of the reasons why I don't like the #" proposal:

- it only deals with carriage returns, not with any other special characters
you might want to insert into a string.  You cannot access FORMAT's ~C operator
since you cannot supply any FORMAT arguments.

- it uses up the ~ character in a way that may not be obvious.

- it's mere syntactic sugar for what you can do easily enough with #. anyway.

I'm also surprised that KMP, who in an earlier message advocated eliminating
syntax that disappears on read-in (so that he doesn't have to write a special
reader for his programmer's-assistant system), is advocating this.  Oh well,
I'm not always consistent either.

∂19-Sep-82  1645	Kent M. Pitman <KMP at MIT-MC>
Date: 19 September 1982 19:45-EDT
From: Kent M. Pitman <KMP at MIT-MC>
To: MOON at MIT-MC
cc: Common-Lisp at SU-AI

Naturally I know #"..." only handles the FORMAT subset which takes no args
and is only shorthand for #.(FORMAT NIL string). I just think that's a 
worthwhile piece of shorthand which I've had need for frequently. I often
sit around trying to think up shorter strings just because the one I want
doesn't fit in the space allotted. #.(FORMAT NIL ...) is just too clumsy.

My earlier comments about losing information on READ had to do with losing 
semantic information, not syntactic information. The information loss on 
reading (defun f () #"This is a return value~%with two lines") is on par
with the information loss of reading (defun f () #o177) in a base-10 reader,
or reading (defun f () '((a . (b c)) (b . (c d)))). No semantic content is 
lost, only syntactic sugar. Whereas, reading something like
(defun f (x) #+maclisp (g x) (h x)) on the LispM loses semantic content
that cannot be recovered. Hence, I don't consider my views on these two issues
to be inconsistent.

I am completely baffled by your remark about using up the ~ char in a way
that may not be obvious. No one should use #"..." if they don't want to put
~'s in their string. Those who do, will know what to expect.

∂19-Sep-82  1905	Richard M. Stallman <RMS at MIT-OZ at MIT-MC> 	MEMBER and ASSOC vs EQL
Date: Sunday, 19 September 1982, 02:25-EDT
From: Richard M. Stallman <RMS at MIT-OZ at MIT-MC>
Subject: MEMBER and ASSOC vs EQL
To: common-lisp at su-ai

I claim that it would be a mistake to change MEMBER and ASSOC to use EQL.

Absolutely every single time that MEMBER or ASSOC is used in the Lisp
machine system, and probably most uses in user code, they are used
specifically because they user wanted to compare lists.  (Where numbers
are being compared, they are usually fixnums, so the user would still
use MEMQ or ASSQ.)

If MEMBER and ASSOC were changed to use EQL, every single use of them
would cease to work and have to be changed.  One might as well delete
these functions as change them so grossly, since their primary reason
for existence is compatibility with ancient tradition.

In fact, it would be better to eliminate them entirely from Common Lisp.
Then users would not actually need to change their code (since the Lisp
machine would still support them) unless the code was to be made portable.
In that case, they would change the code to use a generic sequence function,
which is merely what they would have to do anyway if MEMBER and ASSOC
are changed.

∂19-Sep-82  1934	Scott E. Fahlman <Fahlman at Cmu-20c> 	Minor changes to proposed reader syntax  
Date: Sunday, 19 September 1982  22:31-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Common-Lisp at SU-AI
Subject: Minor changes to proposed reader syntax


I basically agree with MOON on the #" business.  I like #* (or whatever)
better than *" for bit-vectors, but oppose KMP's alternative proposal
for the #" syntax.  FORMAT is tolerable in its place, but we sure don't
want it to spread.  I also oppose the suggestion that #n* be used to
specify a radix for reading the digits of the bit-vector.  This is
certain to cause confusion due to the digits being read left to right --
this was all beaten to death once before.

-- Scott

∂19-Sep-82  2219	RMS at MIT-MC  
Date: Monday, 20 September 1982  01:14-EDT
Sender: RMS at MIT-OZ
From: RMS at MIT-MC
To:   common-lisp at sail

It is fine with me if Common Lisp has only named BLOCK, and not named
PROG or named DO.  But named PROG and named DO exist on the Lisp
machine, and I think it would be a hassle for me and the users to get
rid of them.

This disagreement is no problem by itself; those constructs can still
exist and not be part of Common Lisp.  But I do not want to see other
changes "required" for Common Lisp which would screw up the handling
of named PROG and DO, or be incompatible with their existence.

For example, saying that RETURN is supposed to ignore named blocks
would force a choice between two unpleasant alternatives:
1) named PROG makes two blocks, a named one and an unnamed, which
makes it unanalogous to BLOCK, or
2) many uses of RETURN must be changed.

Can't we please keep down the number of changes that are not VITAL?
I will have to implement every last one of them, and I have lots of
work to do as it is.  Adding a new feature is not very hard,
de-advertising an old feature from the manual is not very hard,
but changing what an existing feature does is a real pain.
-------

∂19-Sep-82  2246	Alan Bawden <ALAN at MIT-MC> 	RETURN in BLOCK and PROG 
Date: 20 September 1982 01:46-EDT
From: Alan Bawden <ALAN at MIT-MC>
Subject:  RETURN in BLOCK and PROG
To: common-lisp at SU-AI

    Date: Monday, 20 September 1982  01:14-EDT
    From: RMS

    For example, saying that RETURN is supposed to ignore named blocks
    would force a choice between two unpleasant alternatives:
    1) named PROG makes two blocks, a named one and an unnamed, which
    makes it unanalogous to BLOCK, or
    2) many uses of RETURN must be changed.

Alternative 1) is what I had in mind when I made the proposal.  The idea is to
flush the misfeature of LispMachine Lisp where you cannot write a macro that
introduces a block named FOO without also introducing a block named NIL which
then keeps the user from using RETURN inside the macro's body.  Thus I regard
it as a win that named PROG and BLOCK will not be analogous.

This doesn't shaft the LispMachine users in the littlest bit since named PROG
and named DO continue to function in exactly the same way as they always have.

∂20-Sep-82  0654	DLW at MIT-MC 	Proposed evaluator for Common LISP -- declarations
Date: Monday, 20 September 1982  09:49-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   Earl A. Killian <EAK at MIT-MC>
Cc:   common-lisp at SU-AI, STEELE at CMU-20C
Subject: Proposed evaluator for Common LISP -- declarations

There should NOT be a way to ask the type of a variable!
CL is not a typed-variable language.  Implementations
are EXPLICITLY ALLOWED to ignore all type declarations.
-------

∂20-Sep-82  0654	DLW at MIT-MC 	Minor changes to proposed reader syntax 
Date: Monday, 20 September 1982  09:48-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
to:   COMMON-LISP at SU-AI
Subject: Minor changes to proposed reader syntax

KMP's reasoning about use of #* sounds right; I think we should do
this.  As for the use of #"...", I don't think I like it.  However,
we might consider the simpler proposal that has been discussed, namely
that #"..." reads a string but ignores all whitespace after returns.
This is less ugly than having those tildes and addresses the basic
problem, without making any of the problems that Moon pointed out.
-------

∂20-Sep-82  1031	RPG  	Vectors and Arrays (Reprise) 
To:   common-lisp at SU-AI  
	I would like to concur with Brooks and KMP that perhaps
what ought to be called `vectors' should be what Scott calls
`simple vectors'. 
			-rpg-

∂20-Sep-82  1039	RPG  	Declarations and Ignorance   
To:   common-lisp at SU-AI  
     From DLW:
     There should NOT be a way to ask the type of a variable!
     CL is not a typed-variable language.  Implementations
     are EXPLICITLY ALLOWED to ignore all type declarations.

This statement seems hastily spoken: if CL has as a goal guaranteeing that
compiled-code semantics are the same as interpretted-code semantics, and
if stock hardware is allowed declarations for the compiler, then it seems
that the interpreter has to be able to obtain the type of a variable.
That is, unless CL is one of those languages whose machine is allowed to
consult an oracle on certain questions.

If you want to EXPLICITLY ALLOW an implementation to IGNORE *ALL* type
declarations, then the way that asks of a variable, what type are you?, can
simply be the constant function #'(LAMBDA (X) 'POINTER).

∂20-Sep-82  1151	Kent M. Pitman <KMP at MIT-MC> 	VAR-TYPE
Date: 20 September 1982 14:51-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject: VAR-TYPE
To: RPG at SU-AI
cc: Common-Lisp at SU-AI

I'm inclined to agree with DLW that asking the type of a variable is a bad 
idea. I cannot think of any reasonable example of a place where VAR-TYPE
could be used in the way you think you are advocating. Can you please make
up and offer for inspection a small piece of code which you think uses
the proposed VAR-TYPE primitive in a way you expect it might typically be 
used? Concrete examples would help a lot. -kmp

∂20-Sep-82  1445	Earl A. Killian            <Killian at MIT-MULTICS> 	declarations
Date:     20 September 1982 1335-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  declarations
To:       DLW at SCRC-TENEX at MIT-MC
cc:       Common-Lisp at SU-AI

I think that you're overreacting.  First, an implementation that really
wants to ignore variable declarations can have VAR-TYPE return T.

Second, you don't object to there being a standard way to access the
documentation string of a variable, do you?  Is a declaration really all
that different?  It is a form of documentation...  KMP was asking for
concrete examples of VAR-TYPE use; the most compelling example to me is
simply me typing it at the debugger (or some code in the debugger that
does it automatically for me).

Also, I think Symbolics will be doing its users a disservice if it
chooses to ignore variable declarations.  Given your current hardware,
there may not be any efficiency to be gained, but efficiency is only
one reason for declarations.  Another reason, probably more important,
is that they help catch errors.  For example, JMB once made a typo and
typed ESC ESC setq tab-equivalent CR to Multics EMACS, which set
tab-equivalent to (), which promptly blew everything away.  You can
argue that there are better ways to handle this, but checking assertions
on variables is a nice general mechanism that can catch many errors;
it has all the good properties of making code read-only.

∂20-Sep-82  1456	Guy.Steele at CMU-10A 	Getting the type of a variable  
Date: 20 September 1982 1605-EDT (Monday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Getting the type of a variable

There is a difference between the interpreter knowing the type
of a variable and interpreted user code being able to get that
information.  The interpreter, like the compiler, is a meta-program.
Type declarations are intended as meta-information; they are about
the user program, not part of it.  That is why an implementation is
allowed to ignore it.  (Unfortunately, SPECIAL declarations are part
of the program, for they affect its semantics.)
CL presently only guarantees equivalent compiled and interpreted
semantics for correct programs, that is, programs that never do
anything that "is an error".  It "is an error" for a variable to
take on a value not of a type compatible with any type declaration
for that variable.

∂20-Sep-82  1710	MOON at SCRC-TENEX 	Bit vectors    
Date: Monday, 20 September 1982  20:10-EDT
From: MOON at SCRC-TENEX
To:   Common-Lisp at sail
Subject: Bit vectors

I only just noticed that the Boolean operations on bit-vectors (BIT-AND and so
forth) are non-destructive operations; they return a new bit-vector containing
the result.

This makes bit-vectors completely redundant with integers, which also have
non-destructive mapped Boolean operations (LOGAND and so forth), except for
the way they print and possibly some benefit to be derived from using
generic sequence operations on them.

I had always assumed that the big feature of bit vectors was the efficiency
to be gained by using destructive operations, in applications such as
parallel intersection and union of sets, e.g. in compiler flow analysis.

I would like to propose that Common Lisp either provide destructive bit-vector
operations, which store into their first argument (possibly in addition to the
non-destructive ones), or else that bit vectors be removed from the language
as an unuseful redundancy.
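
For concreteness, the kind of destructive operation being asked for might
look like this (the name BIT-AND! and the store-into-the-first-argument
convention are only illustrative):

(defun bit-and! (v1 v2)
  ;; AND V2 into V1, element by element, and return V1.
  (dotimes (i (length v1) v1)
    (setf (aref v1 i) (logand (aref v1 i) (aref v2 i)))))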

∂21-Sep-82  0938	Scott E. Fahlman <Fahlman at Cmu-20c> 	Indented Strings
Date: Tuesday, 21 September 1982  12:31-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Kent M. Pitman <KMP at MIT-MC>
Cc:   common-lisp at SU-AI
Subject: Indented Strings


KMP has recently proposed (I don't think this went out to the whole
list) that instead of having #"..." be a back door into format, it
should do an auto-fill on the string at readtime, removing any
whitespace following a newline.  Presumably the auto-fill would need to
look at global variables (defaulted by each implementation) to determine
what the fill-column, word-separators, and newline sequence ought to be.
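
A sketch of just the whitespace-stripping half of that idea (the name and
the exact behavior are assumptions; filling to a column is not shown):

(defun strip-newline-indentation (string)
  ;; Drop the spaces and tabs that immediately follow a newline.
  (let ((chars '())
	(skipping nil))
    (dotimes (i (length string))
      (let ((ch (char string i)))
	(cond ((char= ch #\Newline)
	       (setq skipping t)
	       (push ch chars))
	      ((and skipping (member ch '(#\Space #\Tab)))
	       nil)
	      (t (setq skipping nil)
		 (push ch chars)))))
    (coerce (nreverse chars) 'string)))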

This is getting closer to what we really want for documentation strings,
I think, and it is definitely better than having ~% all over the place.
In fact, such an auto-fill function seems to me like something we might
want to build into the language as a function -- it is at least as
useful as STRING-CAPITALIZE.  In addition to documentation strings, it
would be useful in error messages and for all sorts of interactive
user-prompting stuff.  It is very hard for someone writing portable code
on a Lisp machine to have stuff come out looking nice on a 24x80 screen
-- runtime formatting would help a lot.  We want to keep this fairly
simple, though, and not reinvent Scribe or TEX.

I am still not sure that we want to use #"..." as shorthand for this.
For many uses of the auto-fill function, readtime/compile-time is the
wrong time to do the filling.  What if we just create this new function,
and then state that all documentation strings are run through
STRING-FILL (or whatever) either when they are stashed away or when
printed by DESCRIBE.

-- Scott

∂21-Sep-82  1101	DLW at SCRC-TENEX 	declarations    
Date: Tuesday, 21 September 1982  06:28-EDT
From: DLW at SCRC-TENEX
To:   Earl A. Killian <Killian at MIT-MULTICS>
Cc:   Common-Lisp at SU-AI
Subject: declarations
In-reply-to: The message of 20 Sep 1982  16:35-EDT from Earl A. Killian <Killian at MIT-MULTICS>

No, documentation is not the same thing as declarations.

I think you are overreacting if you read my mail as meaning that
Symbolics Common Lisp is going to throw away all declarations.
I was not talking about any particular implementation.  The
Common Lisp spec, unless I am quite mistaken, is explicit
in saying that implementations are allowed to discard all
declarations except SPECIAL declarations.  I don't know about
you, but I am hoping that there will be other implementations
of Common Lisp someday than the ones that are already under way
now, and this is getting to be a moby big language; part of
the idea of declarations was that an implementation need not
bother with them if it doesn't want to, and this is the sort
of thing that will help people from being discouraged before
they start a new implementation.

∂21-Sep-82  1138	Andy Freeman <CSD.FREEMAN at SU-SCORE> 	Hash table functions
Date: 21 Sep 1982 1109-PDT
From: Andy Freeman <CSD.FREEMAN at SU-SCORE>
Subject: Hash table functions
To: common-lisp at SU-AI

The only way to find the number of entries in a hash table is
to count the number of times that the function arg to maphash is
called.  Since the system has to know this number anyway, even if
as a computation of the number of entries until rehash, the current
size and the threshold, the maphash hack isn't desirable. (If hash
tables are going to be used for large link tables in a discrimination
net, you have to be able to determine the number of entries while
deleting links.)
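
The counting hack in question, for concreteness (a sketch; the point here
is that something like this should not be necessary):

(defun hash-table-entry-count (table)
  ;; Walk the whole table just to count its entries.
  (let ((n 0))
    (maphash #'(lambda (key value)
		 (declare (ignore key value))
		 (setq n (+ n 1)))
	     table)
    n))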

It is impossible to determine the current size of a hash table.

The user has very little control over a hash table that has been created.
There is no way to shrink it or change its :rehash-threshold/size.
Is this intentional?  (Many applications use tables in distinct
phases, modification and access, and should be able to take advantage
of this.)

-andy
-------

∂21-Sep-82  1322	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Hash table functions not all there
Date: Tuesday, 21 September 1982, 16:12-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: Hash table functions not all there
To: Andy Freeman <CSD.FREEMAN at SU-SCORE>
Cc: common-lisp at SU-AI
In-reply-to: The message of 21 Sep 82 14:09-EDT from Andy Freeman <CSD.FREEMAN at SU-SCORE>

I agree with you.  Could you (or someone else) send in a concrete proposal
for what is missing?

For reference, here is a list of the interesting messages to hash tables
in the Lisp machine.  I have censored internal messages and messages
that all objects handle.  Clearly not all the operations you request are
here, although ones for finding out the size (both allocated and in-use)
are.

(:CHOOSE-NEW-SIZE :CLEAR-HASH :COPY-HASH :FILLED-ELEMENTS :GET-HASH :GROW :MAP-HASH
 :NEW-ARRAY :NEXT-ELEMENT :PUT-HASH :REM-HASH :SIZE :SWAP-HASH)

∂21-Sep-82  1347	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	LEXICAL declarations 
Date: Tuesday, 21 September 1982, 16:36-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: LEXICAL declarations
To: common-lisp at su-ai

I think Common Lisp should have a "lexical" declaration that is
analogous to the "special" declaration, mainly for symmetry, also
because it is occasionally useful.

∂21-Sep-82  1409	David A. Moon <Moon at SCRC-TENEX at MIT-MC> 	Indented Strings   
Date: Tuesday, 21 September 1982, 16:25-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Subject: Indented Strings
To: Scott E.Fahlman <Fahlman at Cmu-20c>
Cc: Kent M.Pitman <KMP at MIT-MC>, common-lisp at SU-AI
In-reply-to: The message of 21 Sep 82 12:31-EDT from Scott E.Fahlman <Fahlman at Cmu-20c>

    Date: Tuesday, 21 September 1982  12:31-EDT
    From: Scott E. Fahlman <Fahlman at Cmu-20c>

    For many uses of the auto-fill function, readtime/compile-time is the
    wrong time to do the filling.  What if we just create this new function,
    and then state that all documentation strings are run through
    STRING-FILL (or whatever) either when they are stashed away or when
    printed by DESCRIBE.
I like this a lot better.  How do you break paragraphs (preventing auto-filling
between them), when white space at the beginning of a line is discarded?
Presumably a blank line.  I agree that it should be kept simple and make no
claims of being a text justifier, just a "kludge" to make documentation and
error messages come out readably.  We should resist mightily the temptation to
put in a special character that turns off the flushing of white space at the
beginning of a line.

In systems with windows the filling has to be done at the time that it is
printed.  This is also true of systems where the font can be changed and
systems that support more than one width of terminal.  I think this covers all
proposed Common Lisp implementations.  Thus the function is not a string
operation, but a stream operation.  In fact a string operation might be useful
sometimes too, although this might be better handled by making
WITH-OUTPUT-TO-STRING accept keyword options to tell it what line-length to
assume, and then using the stream operation.

RPG Memorial Proposal
∂22-Sep-82  2138	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and vectors (again)
Date: Thursday, 23 September 1982  00:38-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: Arrays and vectors (again)


Several people have stated that they dislike my earlier proposal because
it uses the good names (VECTOR, STRING, BIT-VECTOR, VREF, CHAR, BIT) on
general 1-D arrays, and makes the user say "simple" when he wants one of
the more specialized high-efficiency versions.  This makes extra work
for users, who will want simple vectors at least 95% of the time.  In
addition, there is the argument that simple vectors should be thought of
as a first-class data-type (in implementations that provide them) and
not as a mere degenerate form of array.

Just to see what it looks like, I have re-worked the earlier proposal to
give the good names to the simple forms.  This does not really eliminate
any of the classes in the earlier proposal, since each of those classes
had some attributes or operations that distinguished it from the others.

Since there are getting to be a lot of proposals around, we need some
nomenclature for future discussions.  My first attempt, with the
user-settable :PRINT option should be called the "print-switch"
proposal; the next one, with the heavy use of the :SIMPLE switch should
be the "simple-switch" proposal; this one can be called the "RPG
memorial" proposal.  Let me know what you think about this vs. the
simple-switch version -- I can live with either, but I really would like
to nail this down pretty soon so that we can get on with the
implementation.

**********************************************************************

Arrays can be 1-D or multi-D.  All arrays can be created by MAKE-ARRAY
and can be accessed with AREF.  Storage is done via SETF of an AREF.
1-D arrays are special, in that they are also of type SEQUENCE, and can
be referenced by ELT.  Also, only 1-D arrays can have fill pointers.

Some implementations may provide a special, highly efficient
representation for simple 1-D arrays, which will be of type VECTOR.  A
vector is 1-dimensional, cannot have a fill pointer, cannot be
displaced, and cannot be altered in size after its creation.  To get a
vector, you use the :VECTOR keyword to MAKE-ARRAY with a non-null value.
If there are any conflicting options specified, an error is signalled.
The MAKE-VECTOR form is equivalent to MAKE-ARRAY with :VECTOR T.

A STRING is a VECTOR whose element-type (specified by the :ELEMENT-TYPE
keyword) is STRING-CHAR.  Strings are special in that they print using
the "..." syntax, and they are legal inputs to a class of "string
functions".  Actually, these functions accept any 1-D array whose
element type is STRING-CHAR.  This more general class is called a
CHAR-SEQUENCE. 

A BIT-VECTOR is a VECTOR whose element-type is BIT, alias (MOD 2).
Bit-vectors are special in that they print using the #*... syntax, and
they are legal inputs to a class of boolean bit-vector functions.
Actually, these functions accept any 1-D array whose element-type is
BIT.  This more general class is called a BIT-SEQUENCE.

All arrays can be referenced via AREF, but in some implementations
additional efficiency can be obtained by declaring certain objects to be
vectors, strings, or bit-vectors.  This can be done by normal
type-declarations or by special accessing forms.  The form (VREF v n) is
equivalent to (AREF (THE VECTOR v) n).  The form (CHAR s n) is
equivalent to (AREF (THE STRING s) n).  The form (BIT b n) is equivalent
to (AREF (THE BIT-VECTOR b) n).

If an implementation does not support vectors, the :VECTOR keyword is
ignored except that the error is still signalled on inconsistent cases;
the additional restrictions on vectors are not enforced.  MAKE-VECTOR is
treated just like the equivalent MAKE-ARRAY.  VECTORP is true of every
1-D array, STRINGP of every CHAR-SEQUENCE, and BIT-VECTORP of every
BIT-SEQUENCE.

CHAR-SEQUENCEs, including strings, self-eval; all other arrays cause an
error when passed to EVAL.  EQUAL descends into CHAR-SEQUENCEs, but not into
any other arrays.  EQUALP descends into arrays of all kinds, comparing
the corresponding elements with EQUALP.  EQUALP is false if the array
dimensions are not the same, but it is not sensitive to the element-type
of the array, whether it is a vector, etc.  In comparing the dimensions of
vectors, EQUALP uses the length from 0 to the fill pointer; it does not
look at any elements beyond the fill pointer.

The set of type-specifiers required for all of this is ARRAY, VECTOR,
STRING, BIT-VECTOR, SEQUENCE, CHAR-SEQUENCE, and BIT-SEQUENCE.
Each of these has a corresponding type-P predicate, and each can be
specified in list form, along with the element-type and dimension(s).

MAKE-ARRAY takes the following keywords: :ELEMENT-TYPE, :INITIAL-VALUE,
:INITIAL-CONTENTS, :FILL-POINTER, :DISPLACED-TO, :DISPLACED-INDEX-OFFSET,
and :VECTOR.

The following functions are redundant, but should be retained for
clarity and emphasis in code: MAKE-VECTOR, MAKE-STRING, MAKE-BIT-VECTOR.
MAKE-VECTOR takes a single length argument, along with :ELEMENT-TYPE,
:INITIAL-VALUE, and :INITIAL-CONTENTS.  MAKE-STRING and MAKE-BIT-VECTOR
are like MAKE-VECTOR, but do not take the :ELEMENT-TYPE keyword, since
the element-type is implicit.
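
For example (illustrative only; these are the forms proposed above, not
existing functions), the following would be equivalent ways of getting
simple specialized vectors:

(make-array 100 :element-type 'string-char :vector t)	;a string, the long way
(make-string 100)					;the same thing via the shorthand
(make-vector 8 :element-type 'bit :initial-value 0)	;same as (make-bit-vector 8 :initial-value 0)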

If the :VECTOR keyword is not specified to MAKE-ARRAY or related forms,
the default is NIL.  However, sequences produced by random forms such as
CONCATENATE are vectors.

Strings always are printed using the "..." syntax.  Bit-vectors always
are printed using the #*... syntax.  Other vectors always print using
the #(...) syntax.  Note that in the latter case, any element-type
restriction is lost upon read-in, since this form always produces a
vector of type T when it is read.  However, the new vector will be
EQUALP to the old one.  The #(...) syntax observes PRINLEVEL,
PRINLENGTH, and SUPPRESS-ARRAY-PRINTING.  The latter switch, if non-NIL,
causes the array to print in a non-readable form: #<ARRAY...>.

CHAR-SEQUENCEs print out as though they were strings, using the "..."
syntax.  BIT-SEQUENCES print out as BIT-STRINGS, using the #*... syntax.
All other arrays print out using the #nA(...) syntax, where n is the
number of dimensions and the list is actually a list of lists of lists,
nested n levels deep.  The array elements appear at the lowest level.
The #A syntax also observes PRINLEVEL, PRINLENGTH, and
SUPPRESS-ARRAY-PRINTING.  The #A format reads in as a non-displaced
array of element-type T.
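
For example (illustrative only), the printed representations described
above would look like this:

"abc"			;a string
#*10110			;a bit-vector
#(1 2.5 "three")	;a general vector; reads back with element-type T
#2A((1 2) (3 4))	;a 2-by-2 array of element-type T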

Note that when an array is printed and read back in, the new version is
EQUALP to the original, but some information about the original is lost:
whether the original was a vector or not, element type restrictions,
whether the array was displaced, whether there was a fill pointer, and
the identity of any elements beyond the fill-pointer.  This choice was
made to favor ease of interactive use; if the user really wants to
preserve in printed form some complex data structure containing more
complex arrays, he will have to develop his own print format and printer.

∂23-Sep-82  0449	DLW at MIT-MC 	Arrays and vectors (again)    
Date: Thursday, 23 September 1982  07:48-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   Scott E. Fahlman <Fahlman at Cmu-20c>
Cc:   common-lisp at SU-AI
Subject: Arrays and vectors (again)

In your latest ("RPG memorial") proposal, strings are vectors, and so if I
write a program that creates a string with a fill pointer, it may not work
in some Common Lisp implementations.  This has been my main objection to
most of the earlier proposals.  Strings with fill pointers are extremely
useful.
-------

∂23-Sep-82  0702	Leonard N. Zubkoff <Zubkoff at Cmu-20c> 	Arrays and vectors (again)   
Date: Thursday, 23 September 1982  09:49-EDT
From: Leonard N. Zubkoff <Zubkoff at Cmu-20c>
To:   Scott E. Fahlman <Fahlman at CMU-20C>
Cc:   common-lisp at SU-AI
Subject: Arrays and vectors (again)

My vote goes for the new "RPG memorial" proposal.  I think the name assignments
are far more reasonable in this version.

		Leonard

∂23-Sep-82  0929	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and vectors (again)
Date: Thursday, 23 September 1982  12:14-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   DLW at MIT-MC
Cc:   common-lisp at SU-AI
Subject: Arrays and vectors (again)


The "RPG memorial" proposal contains almost exactly the same machinery
as the "simple-switch" proposal; only the names have been changed to
protect the simple.  With regard to strings, the difference is that
asking for a "string" gets you what was previously called a simple
string -- no fill pointer.  You can still get a string-like object with
a fill pointer, but you have to get it via MAKE-ARRAY.  The "string"
functions still work on it, and it still prints out with the
double-quote syntax.  On read-in of a "..." form, you end up with a
simple string, but everyone agreed to that earlier, I believe.  It would
be very awkward, and not too useful, to have a printing syntax that
preserved the fill pointer and the characters beyond it.

While I agree that "strings with fill pointers" are essential things to
have around, I think that they are needed in relatively few places, so
a name-change to favor the more common simple case should not be too
difficult to live with.  Am I missing something here?

-- Scott

∂25-Sep-82  0338	Kent M. Pitman <KMP at MIT-MC> 	Arrays and Vectors
Date: 25 September 1982 06:39-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Arrays and Vectors
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI

I am in agreement with much of the "RPG memorial" proposal. A few comments
on the few parts that left me feeling uneasy...
-----
    Date: Thursday, 23 September 1982  00:38-EDT
    From: Scott E. Fahlman <Fahlman at Cmu-20c>
    Re:   Arrays and vectors (again)

    ... If an implementation does not support vectors, the :VECTOR keyword is
    ignored except that the error is still signalled on inconsistent cases;
    The additional restrictions on vectors are not enforced.  MAKE-VECTOR is
    treated just like the equivalent make-array.  VECTORP is true of every
    1-D array, STRINGP of every CHAR-SEQUENCE, and BIT-VECTORP of every
    BIT-SEQUENCE.
-----
If an implementation DOES support vectors, does VECTORP return true for 
all 1-D arrays? If not, I think you have this backwards. If an implementation
doesn't support vectors, VECTORP, STRINGP, and BIT-VECTORP should always
return NIL... I think this answers DLW's point about strings wanting to not
be vectors. In his system, vectors will not exist, so he may write:
 (COND ((VECTORP foo) (PRIMITIVE-THAT-DOESNT-WORK-ON-REAL-VECTORS foo))
       ...)
and though erroneous, it would run in his implementation.  He wouldn't find
out that that primitive wouldn't work on real vectors until he ported his
code to other systems.  Further, he can't write
 (COND ((VECTORP foo) ...code for sites with vectors...)
       (T ...code for things that wouldn't be vectors at other sites...))
because the things that wouldn't be vectors at other sites are still vectors
on his machine which doesn't claim to support vectors.

The right thing for sites that don't have vectors is to make them punt and
always use the fully generic operators. You'll never get to code that calls
hairy generic stuff if you have VECTORP lie and say everything is simple!
You want it to lie and say everything is not.
-----
    CHAR-SEQUENCEs, including strings, self-eval; all other arrays cause an
    error when passed to EVAL...
-----
In thinking about this, I'm relatively convinced that the exact set of things
which want to self-eval are those things which are intended to be typed in.
The reason is that other things just don't tend to wind up in evaluable
positions.  I can't imagine how I could ever end up with (PRINT <hairy-string>)
very easily, and I'm inclined to think it's an error.  It's easy to see how
(PRINT <var-holding-hairy-string>) can happen, but that's not going to
cause the hairy-string to be EVAL'd, so it doesn't matter. Seems like it'd be
worth the error checking to make only strings self-eval. 
-----
    EQUAL descends into CHAR-SEQUENCEs, but not into any other arrays.
-----
I would argue that this also follows from the fact that the contents of this
kind of array are always visible. In this sense, this satisfies the novice's
heuristic about EQUAL that says if two things print the same, they are 
probably EQUAL. I suspect that the same reasoning says that BIT-SEQUENCEs
should also be descended by EQUAL. It should probably be made clear that 
EQUAL descends only the main data area of the arrays it descends and not the
array leader. Indeed, I assume that a string (having no array leader) can be
equal to a CHAR-SEQUENCE which has one, if the main data areas are the
same?  In that case, is the fill-pointer looked at in the complex case?
I assume so. That should also be made explicit in the documentation.
-----
    EQUALP descends into arrays of all kinds, comparing
    the corresponding elements with EQUALP.  EQUALP is false if the array
    dimensions are not the same, but it is not sensitive to the element-type
    of the array, whether it is a vector, etc.  In comparing the dimensions of
    vectors, EQUALP uses the length from 0 to the fill pointer; it does not
    look at any elements beyond the fill pointer.
-----
Again, I take it that it doesn't descend array leaders?

∂25-Sep-82  0716	Guy.Steele at CMU-10A 	KMP's remarks on arrays    
Date: 25 September 1982 1016-EDT (Saturday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  KMP's remarks on arrays

Recall, as a point of fact, that array leaders have been removed
from Common LISP as a user-visible feature.
--Guy

∂26-Sep-82  1958	Scott E. Fahlman <Fahlman at Cmu-20c> 	Reply to KMP    
Date: Sunday, 26 September 1982  22:58-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Kent M. Pitman <KMP at MIT-MC>
Cc:   common-lisp at SU-AI
Subject: Reply to KMP


I disagree with KMP's analysis of what VECTOR should do in the "RPG
memorial" proposal.  I think I confused him by talking about
"implementations that do not support vectors".  He seems to believe that
VREF, CHAR, and BIT would not work in such implementations.  That was
not my intent.

Perhaps the right way to look at it is to say that EVERY Common Lisp
implementation supports vectors.  In some implementations (notably
Zetalisp) vectors and 1-D arrays are the same thing; in other
implementations (including Vax and Spice Lisp) vectors are a restricted
subset of 1-D arrays.  VREF works only on vectors (it is equivalent to
AREF with a VECTOR declaration).  That means that in Zetalisp, VREF
would work on every 1-D array, and VECTORP would be true for every 1-D
array.  BIT-VECTORS and STRINGS would likewise be identical to
BIT-SEQUENCES and CHAR-SEQUENCES in Zetalisp.  I don't think this is
backwards.

I think that simple strings have to self-eval.  If it were up to me, all
arrays would self-eval, but this was voted down because it was felt that
it provided too little error checking.  I don't care whether general
char-sequences (or complex strings, whatever) self-eval or not, but I
think the Zetalisp folks would like the complex and simple strings to
behave pretty much the same.  I don't think that we want bit-vectors to
self-eval unless every vector does.  Similarly, I think EQUAL has to go
down into strings; it should not go down into bit-sequences unless it
goes down into every vector.

There are no user-visible array leaders in Common Lisp.  If an
implementation wants to provide user-visible additions to the Common
Lisp data structures (array leaders or property lists on strings or
whatever) it is up to that implementation to describe how these things
interact with the built-in features; all that is required is that legal
Common Lisp code run without modification.  I would suggest that EQUAL
and EQUALP not descend into such things, but it is really none of Common
Lisp's business.

-- Scott

∂26-Sep-82  2128	STEELE at CMU-20C 	Revised proposed evaluator(s)  
Date: 27 Sep 1982 0027-EDT
From: STEELE at CMU-20C
Subject: Revised proposed evaluator(s)
To: common-lisp at SU-AI

In response to comments on the proposed sample Common LISP
evaluator, I have made these changes:
(1) Fixed an EVALHOOK bug; now the variable EVALHOOK is bound to NIL
    over the invocation of the hook function.
(2) Fixed a bug in BLOCK; now the normal return values are properly returned.
(3) Fixed PROG to parse the declarations properly and put them in the LET
    used to bind the variables.
(4) EVAL now calls *EVAL, not %EVAL, for parallelism with other versions.
Enclosed is the fixed version 1, and also version 2, which uses special
variables for VENV, FENV, BENV, and GENV to avoid parameter passing for
these slowly-changing variables.  (Version 3, which is the bummed version
for Spice LISP, is about half-done and not enclosed here.)
--Guy
-----------------------------------------------------------
;;; This evaluator splits the lexical environment into four
;;; logically distinct entities:
;;;	VENV = lexical variable environment
;;;	FENV = lexical function and macro environment
;;;	BENV = block name environment
;;;	GENV = go tag environment
;;; Each environment is an a-list.  It is never the case that
;;; one can grow and another shrink simultaneously; the four
;;; parts could be united into a single a-list.  The four-part
;;; division saves consing and search time.
;;;
;;; Each entry in VENV has one of two forms: (VAR VALUE) or (VAR).
;;; The first indicates a lexical binding of VAR to VALUE, and the
;;; second indicates a special binding of VAR (implying that the
;;; special value should be used).
;;;
;;; Each entry in FENV looks like (NAME TYPE . FN), where NAME is the
;;; functional name, TYPE is either FUNCTION or MACRO, and FN is the
;;; function or macro-expansion function, respectively.  Entries of
;;; type FUNCTION are made by FLET and LABELS; those of type MACRO
;;; are made by MACROLET.
;;;
;;; Each entry in BENV looks like (NAME NIL), where NAME is the name
;;; of the block.  The NIL is there primarily so that two distinct
;;; conses will be present, namely the entry and the cdr of the entry.
;;; These are used internally as catch tags, the first for RETURN and the
;;; second for RESTART.  If the NIL has been clobbered to be INVALID,
;;; then the block has been exited, and a return to that block is an error.
;;;
;;; Each entry in GENV looks like (TAG MARKER . BODY), where TAG is
;;; a go tag, MARKER is a unique cons used as a catch tag, and BODY
;;; is the statement sequence that follows the go tag.  If the car of
;;; MARKER, normally NIL, has been clobbered to be INVALID, then
;;; the tag body has been exited, and a go to that tag is an error.
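;;;
;;; For example (illustrative only): after binding a lexical X to 1 and
;;; a variable Y declared SPECIAL, VENV might contain the entries (X 1)
;;; and (Y), and after (FLET ((F (A) A)) ...) FENV would contain an
;;; entry of the form (F FUNCTION . #<function>).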

;;; An interpreted-lexical-closure contains a function (normally a
;;; lambda-expression) and the lexical environment.

(defstruct interpreted-lexical-closure function venv fenv benv genv)


;;; The EVALHOOK feature allows a user-supplied function to be called
;;; whenever a form is to be evaluated.  The presence of the lexical
;;; environment requires an extension of the feature as it is defined
;;; in MacLISP.  Here, the user hook function must accept not only
;;; the form to be evaluated, but also the components of the lexical
;;; environment; these must then be passed verbatim to EVALHOOK or
;;; *EVAL in order to perform the evaluation of the form correctly.
;;; The precise number of components should perhaps be allowed to be
;;; implementation-dependent, so it is probably best to require the
;;; user hook function to accept arguments as (FORM &REST ENV) and
;;; then to perform evaluation by (APPLY #'EVALHOOK FORM HOOKFN ENV),
;;; for example.
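
;;; As an illustration only (not part of the evaluator proper), a user
;;; hook function written to that convention might look like this; it
;;; prints each form before evaluating it in the normal way:

(defun print-eval-hook (form &rest env)
  (print form)
  (apply #'evalhook form #'print-eval-hook env))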

(defvar evalhook nil)

(defun evalhook (exp hookfn venv fenv benv genv)
  (let ((evalhook hookfn)) (%eval exp venv fenv benv genv)))

(defun eval (exp)
  (*eval exp nil nil nil nil))

;;; *EVAL looks useless here, but does more complex things
;;; in alternative implementations of this evaluator.

(defun *eval (exp venv fenv benv genv)
  (%eval exp venv fenv benv genv))
!
;;; Function names beginning with "%" are intended to be internal
;;; and not defined in the Common LISP white pages.

;;; %EVAL is the main evaluation function.

(defun %eval (exp venv fenv benv genv)
  (if (not (null evalhook))
      (let ((hookfn evalhook) (evalhook nil))
	(funcall hookfn exp venv fenv benv genv))
      (typecase exp
	;; A symbol is first looked up in the lexical variable environment.
	(symbol (let ((slot (assoc exp venv)))
		  (cond ((and (not (null slot)) (not (null (cdr slot))))
			 (cadr slot))
			((boundp exp) (symbol-value exp))
			(t (cerror :unbound-variable
				   "The symbol ~S has no value"
				   exp)))))
	;; Numbers, strings, and characters self-evaluate.
	((or number string character) exp)
	;; Conses require elaborate treatment based on the car.
	(cons (typecase (car exp)
		;; A symbol is first looked up in the lexical function environment.
		;; This lookup is cheap if the environment is empty, a common case.
		(symbol
		 (let ((fn (car exp)))
		   (loop (let ((slot (assoc fn fenv)))
			   (unless (null slot)
			     (return (case (cadr slot)
				       (macro (%eval (%macroexpand
						      (cddr slot)
						      (if (eq fn (car exp))
							  exp
							  (cons fn (cdr exp))))
						     venv fenv benv genv))
				       (function (%apply (cddr slot)
							 (%evlis (cdr exp) venv fenv benv genv)))
				       (t <implementation-error>)))))
			 ;; If not in lexical function environment,
			 ;;  try the definition cell of the symbol.
			 (when (fboundp fn)
			   (return (cond ((special-form-p fn)
					  (%invoke-special-form
					   fn (cdr exp) venv fenv benv genv))
					 ((macro-p fn)
					  (%eval (%macroexpand
						  (get-macro-function (symbol-function fn))
						  (if (eq fn (car exp))
						      exp
						      (cons fn (cdr exp))))
						 venv fenv benv genv))
					 (t (%apply (symbol-function fn)
						    (%evlis (cdr exp) venv fenv benv genv))))))
			 (setq fn
			       (cerror :undefined-function
				       "The symbol ~S has no function definition"
				       fn))
			 (unless (symbolp fn)
			   (return (%apply fn (%evlis (cdr exp) venv fenv benv genv)))))))
		;; A cons in function position must be a lambda-expression.
		;; Note that the construction of a lexical closure is avoided here.
		(cons (%lambda-apply (car exp) venv fenv benv genv
				     (%evlis (cdr exp) venv fenv benv genv)))
		(t (%eval (cerror :invalid-form
				  "Cannot evaluate the form ~S: function position has invalid type ~S"
				  exp (type-of (car exp)))
			  venv fenv benv genv))))
	(t (%eval (cerror :invalid-form
			  "Cannot evaluate the form ~S: invalid type ~S"
			  exp (type-of exp))
		  venv fenv benv genv)))))
!
;;; Given a list of forms, evaluate each and return a list of results.

(defun %evlis (forms venv fenv benv genv)
  (mapcar #'(lambda (form) (%eval form venv fenv benv genv)) forms))

;;; Given a list of forms, evaluate each, discarding the results of
;;; all but the last, and returning all results from the last.

(defun %evprogn (body venv fenv benv genv)
  (if (endp body) nil
      (do ((b body (cdr b)))
	  ((endp (cdr b))
	   (%eval (car b) venv fenv benv genv))
	(%eval (car b) venv fenv benv genv))))

;;; APPLY takes a function, a number of single arguments, and finally
;;; a list of all remaining arguments.  The following song and dance
;;; attempts to construct efficiently a list of all the arguments.

(defun apply (fn firstarg &rest args*)
  (%apply fn
	  (cond ((null args*) firstarg)
		((null (cdr args*)) (cons firstarg (car args*)))
		(t (do ((x args* (cdr x))
			(z (cddr args*) (cdr z)))
		       ((null z)
			(rplacd x (cadr x))
			(cons firstarg (car args*))))))))
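
;;; For example, (APPLY #'+ 1 2 '(3 4)) constructs the argument list
;;; (1 2 3 4) and so computes (+ 1 2 3 4) = 10.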
!
;;; %APPLY does the real work of applying a function to a list of arguments.

(defun %apply (fn args)
  (typecase fn
    ;; For closures over dynamic variables, complex magic is required.
    (closure (with-closure-bindings-in-effect fn
					      (%apply (closure-function fn) args)))
    ;; For a compiled function, an implementation-dependent "spread"
    ;;  operation and invocation is required.
    (compiled-function (%invoke-compiled-function fn args))
    ;; The same goes for a compiled closure over lexical variables.
    (compiled-lexical-closure (%invoke-compiled-lexical-closure fn args))
    ;; The treatment of interpreted lexical closures is elucidated fully here.
    (interpreted-lexical-closure
     (%lambda-apply (interpreted-lexical-closure-function fn)
		    (interpreted-lexical-closure-venv fn)
		    (interpreted-lexical-closure-fenv fn)
		    (interpreted-lexical-closure-benv fn)
		    (interpreted-lexical-closure-genv fn)
		    args))
    ;; For a symbol, the function definition is used, if it is a function.
    (symbol (%apply (cond ((not (fboundp fn))
			   (cerror :undefined-function
				   "The symbol ~S has no function definition"
				   fn))
			  ((special-form-p fn)
			   (cerror :invalid-function
				   "The symbol ~S cannot be applied: it names a special form"
				   fn))
			  ((macro-p fn)
			   (cerror :invalid-function
				   "The symbol ~S cannot be applied: it names a macro"
				   fn))
			  (t (symbol-function fn)))
		    args))
    ;; Applying a raw lambda-expression uses the null lexical environment.
    (cons (if (eq (car fn) 'lambda)
	      (%lambda-apply fn nil nil nil nil args)
	      (%apply (cerror :invalid-function
			      "~S is not a valid function"
			      fn)
		      args)))
    (t (%apply (cerror :invalid-function
		       "~S has an invalid type ~S for a function"
		       fn (type-of fn))
	       args))))
!
;;; %LAMBDA-APPLY is the hairy part, that takes care of applying
;;; a lambda-expression in a given lexical environment to given
;;; arguments.  The complexity arises primarily from the processing
;;; of the parameter list.
;;;
;;; If at any point the lambda-expression is found to be malformed
;;; (typically because of an invalid parameter list), or if the list
;;; of arguments is not suitable for the lambda-expression, a correctable
;;; error is signalled; correction causes a throw to be performed to
;;; the tag %LAMBDA-APPLY-RETRY, passing back a (possibly new)
;;; lambda-expression and a (possibly new) list of arguments.
;;; The application is then retried.  If the new lambda-expression
;;; is not really a lambda-expression, then %APPLY is used instead of
;;; %LAMBDA-APPLY.
;;;
;;; In this evaluator, PROGV is used to instantiate variable bindings
;;; (though its use is embedded within a macro called %BIND-VAR).
;;; The throw that precedes a retry will cause special bindings to
;;; be popped before the retry.

(defun %lambda-apply (lexp venv fenv benv genv args)
  (multiple-value-bind (newfn newargs)
		       (catch '%lambda-apply-retry
			 (return-from %lambda-apply
			   (%lambda-apply-1 lexp venv fenv benv genv args)))
    (if (and (consp newfn) (eq (car newfn) 'lambda))
	(%lambda-apply newfn venv fenv benv genv newargs)
	(%apply newfn newargs))))

;;; Calling this function will unwind all special variables
;;; and cause FN to be applied to ARGS in the original lexical
;;; and dynamic environment in force when %LAMBDA-APPLY was called.

(defun %lambda-apply-retry (fn args)
  (throw '%lambda-apply-retry (values fn args)))

;;; This function is convenient when the lambda expression is found
;;; to be malformed.  REASON should be a string explaining the problem.

(defun %bad-lambda-exp (lexp oldargs reason)
  (%lambda-apply-retry
   (cerror :invalid-function
	   "Improperly formed lambda-expression ~S: ~A"
	   lexp reason)
   oldargs))

;;; (%BIND-VAR VAR VALUE . BODY) evaluates VAR to produce a symbol name
;;; and VALUE to produce a value.  If VAR is determined to have been
;;; declared special (as indicated by the current binding of the variable
;;; SPECIALS, which should be a list of symbols, or by a SPECIAL property),
;;; then a special binding is established using PROGV.  Otherwise an
;;; entry is pushed onto the a-list presumed to be in the variable VENV.

(defmacro %bind-var (var value &body body)
  `(let ((var ,var) (value ,value))
     (let ((specp (or (member var specials) (get var 'special))))
       (progv (and specp (list var)) (and specp (list value))
	 (push (if specp (list var) (list var value)) venv)
	 ,@body))))

;;; %LAMBDA-KEYWORD-P is true iff X (which must be a symbol)
;;; has a name beginning with an ampersand.

(defun %lambda-keyword-p (x)
  (char= #\& (char 0 (symbol-pname x))))
!
;;; %LAMBDA-APPLY-1 is responsible for verifying that LEXP is
;;; a lambda-expression, for extracting a list of all variables
;;; declared SPECIAL in DECLARE forms, and for finding the
;;; body that follows any DECLARE forms.

(defun %lambda-apply-1 (lexp venv fenv benv genv args)
  (cond ((or (not (consp lexp))
	     (not (eq (car lexp) 'lambda))
	     (atom (cdr lexp))
	     (not (listp (cadr lexp))))
	 (%bad-lambda-exp lexp args "improper lambda or lambda-list"))
	(t (do ((body (cddr lexp) (cdr body))
		(specials '()))
	       ((or (endp body)
		    (not (listp (car body)))
		    (not (eq (caar body) 'declare)))
		(%bind-required lexp args (cadr lexp) fenv benv genv venv args specials body))
	     (dolist (decl (cdar body))
	       (when (eq (car decl) 'special)
		 (setq specials
		       (if (null specials)		;Avoid consing
			   (cdar decl)
			   (append (cdar decl) specials)))))))))

;;; %BIND-REQUIRED handles the pairing of arguments to required parameters.
;;; Error checking is performed for too few or too many arguments.
;;; If a lambda-list keyword is found, %TRY-OPTIONAL is called.
;;; Here, as elsewhere, if the binding process terminates satisfactorily
;;; then the body is evaluated using %EVPROGN in the newly constructed
;;; dynamic and lexical environment.

(defun %bind-required (lexp oldargs varlist fenv benv genv venv args specials body)
  (cond ((endp varlist)
	 (if (null args)
	     (%evprogn body venv fenv benv genv)
	     (%lambda-apply-retry lexp
				  (cerror :too-many-arguments
					  "Too many arguments for function ~S: ~S"
					  lexp args))))
	((not (symbolp (car varlist)))
	 (%bad-lambda-exp lexp oldargs "required parameter name not a symbol"))
	((%lambda-keyword-p (car varlist))
	 (%try-optional lexp oldargs varlist fenv benv genv venv args specials body))
	((null args)
	 (%lambda-apply-retry lexp 
			      (cerror :too-few-arguments
				      "Too few arguments for function ~S: ~S"
				      lexp oldargs)))
	(t (%bind-var (car varlist) (car args)
	     (%bind-required lexp oldargs (cdr varlist) fenv benv genv venv (cdr args) specials body)))))
!
;;; %TRY-OPTIONAL determines whether the lambda-list keyword &OPTIONAL
;;; has been found.  If so, optional parameters are processed; if not,
;;; the buck is passed to %TRY-REST.

(defun %try-optional (lexp oldargs varlist fenv benv genv venv args specials body)
  (cond ((eq (car varlist) '&optional)
	 (%bind-optional lexp oldargs (cdr varlist) fenv benv genv venv args specials body))
	(t (%try-rest lexp oldargs varlist fenv benv genv venv args specials body))))

;;; %BIND-OPTIONAL determines whether the parameter list is exhausted.
;;; If not, it parses the next specifier.

(defun %bind-optional (lexp oldargs varlist fenv benv genv venv args specials body)
  (cond ((endp varlist)
	 (if (null args)
	     (%evprogn body venv fenv benv genv)
	     (%lambda-apply-retry lexp
				  (cerror :too-many-arguments
					  "Too many arguments for function ~S: ~S"
					  lexp args))))
	(t (let ((varspec (car varlist)))
	     (cond ((symbolp varspec)
		    (if (%lambda-keyword-p varspec)
			(%try-rest lexp oldargs varlist fenv benv genv venv args specials body)
			(%process-optional lexp oldargs varlist fenv benv genv
					   venv args specials body varspec nil nil)))
		   ((and (consp varspec)
			 (symbolp (car varspec))
			 (listp (cdr varspec))
			 (or (endp (cddr varspec))
			     (and (symbolp (caddr varspec))
				  (not (null (caddr varspec)))
				  (endp (cdddr varspec)))))
		    (%process-optional lexp oldargs varlist fenv benv genv
				       venv args specials body
				       (car varspec)
				       (cadr varspec)
				       (caddr varspec)))
		   (t (%bad-lambda-exp lexp oldargs "malformed optional parameter specifier")))))))

;;; %PROCESS-OPTIONAL takes care of binding the parameter,
;;; and also the supplied-p variable, if any.

(defun %process-optional (lexp oldargs varlist fenv benv genv venv args specials body var init varp)
  (let ((value (if (null args) (%eval init venv fenv benv genv) (car args))))
    (%bind-var var value
      (if varp
	  (%bind-var varp (not (null args))
	    (%bind-optional lexp oldargs (cdr varlist) fenv benv genv venv (cdr args) specials body))
	  (%bind-optional lexp oldargs (cdr varlist) fenv benv genv venv (cdr args) specials body)))))
!
;;; %TRY-REST determines whether the lambda-list keyword &REST
;;; has been found.  If so, the rest parameter is processed;
;;; if not, the buck is passed to %TRY-KEY, after a check for
;;; too many arguments.

(defun %try-rest (lexp oldargs varlist fenv benv genv venv args specials body)
  (cond ((eq (car varlist) '&rest)
	 (%bind-rest lexp oldargs (cdr varlist) fenv benv genv venv args specials body))
	((and (not (eq (car varlist) '&key))
	      (not (null args)))
	 (%lambda-apply-retry lexp
			      (cerror :too-many-arguments
				      "Too many arguments for function ~S: ~S"
				      lexp args)))
	(t (%try-key lexp oldargs varlist fenv benv genv venv args specials body))))

;;; %BIND-REST ensures that there is a parameter specifier for
;;; the &REST parameter, binds it, and then evaluates the body or
;;; calls %TRY-KEY.

(defun %bind-rest (lexp oldargs varlist fenv benv genv venv args specials body)
  (cond ((or (endp varlist)
	     (not (symbolp (car varlist))))
	 (%bad-lambda-exp lexp oldargs "missing rest parameter specifier"))
	(t (%bind-var (car varlist) args
	     (cond ((endp (cdr varlist))
		    (%evprogn body venv fenv benv genv))
		   ((and (symbolp (cadr varlist))
			 (%lambda-keyword-p (cadr varlist)))
		    (%try-key lexp oldargs (cdr varlist) fenv benv genv venv args specials body))
		   (t (%bad-lambda-exp lexp oldargs "malformed after rest parameter specifier")))))))
!
;;; %TRY-KEY determines whether the lambda-list keyword &KEY
;;; has been found.  If so, keyword parameters are processed;
;;; if not, the buck is passed to %TRY-AUX.

(defun %try-key (lexp oldargs varlist fenv benv genv venv args specials body)
  (cond ((eq (car varlist) '&key)
	 (%bind-key lexp oldargs (cdr varlist) fenv benv genv venv args specials body nil))
	(t (%try-aux lexp oldargs varlist fenv benv genv venv specials body))))

;;; %BIND-KEY determines whether the parameter list is exhausted.
;;; If not, it parses the next specifier.

(defun %bind-key (lexp oldargs varlist fenv benv genv venv args specials body keys)
  (cond ((endp varlist)
	 ;; Optional error check for bad keywords.
	 (do ((a args (cddr a)))
	     ((endp a))
	   (unless (member (car a) keys)
	     (cerror :unexpected-keyword
		     "Keyword not expected by function ~S: ~S"
		     lexp (car a))))
	 (%evprogn body venv fenv benv genv))
	(t (let ((varspec (car varlist)))
	     (cond ((symbolp varspec)
		    (if (%lambda-keyword-p varspec)
			(cond ((not (eq varspec '&allow-other-keywords))
			       (%try-aux lexp oldargs varlist fenv benv genv venv specials body))
			      ((endp (cdr varlist))
			       (%evprogn body venv fenv benv genv))
			      ((%lambda-keyword-p (cadr varlist))
			       (%try-aux lexp oldargs (cdr varlist) fenv benv genv venv specials body))
			      (t (%bad-lambda-exp lexp oldargs "invalid after &ALLOW-OTHER-KEYWORDS")))
			(%process-key lexp oldargs varlist fenv benv genv
				      venv args specials body keys
				      (intern varspec keyword-package)
				      varspec nil nil)))
		   ((and (consp varspec)
			 (or (symbolp (car varspec))
			     (and (consp (car varspec))
				  (consp (cdar varspec))
				  (symbolp (cadar varspec))
				  (endp (cddar varspec))))
			 (listp (cdr varspec))
			 (or (endp (cddr varspec))
			     (and (symbolp (caddr varspec))
				  (not (null (caddr varspec)))
				  (endp (cdddr varspec)))))
		    (%process-key lexp oldargs varlist fenv benv genv
				  venv args specials body keys
				  (if (consp (car varspec))
				      (caar varspec)
				      (intern (car varspec) keyword-package))
				  (if (consp (car varspec))
				      (cadar varspec)
				      (car varspec))
				  (cadr varspec)
				  (caddr varspec)))
		   (t (%bad-lambda-exp lexp oldargs "malformed keyword parameter specifier")))))))

;;; %PROCESS-KEY takes care of binding the parameter,
;;; and also the supplied-p variable, if any.

(defun %process-key (lexp oldargs varlist fenv benv genv venv args specials body keys kwd var init varp)
  (let* ((slot (do ((a args (cddr a)))
		   ((endp a) nil)
		 (when (eq (car a) kwd)
		   (return a))))
	 (value (if slot (cadr slot) (%eval init venv fenv benv genv))))
    (%bind-var var value
      (if varp
	  (%bind-var varp (not (null slot))
	    (%bind-key lexp oldargs (cdr varlist) fenv benv genv venv args specials body (cons kwd keys)))
	  (%bind-key lexp oldargs (cdr varlist) fenv benv genv venv args specials body (cons kwd keys))))))
!
;;; %TRY-AUX determines whether the keyword &AUX
;;; has been found.  If so, auxiliary variables are processed;
;;; if not, an error is signalled.

(defun %try-aux (lexp oldargs varlist fenv benv genv venv specials body)
  (cond ((eq (car varlist) '&aux)
	 (%bind-aux lexp oldargs (cdr varlist) fenv benv genv venv specials body))
	(t (%bad-lambda-exp lexp oldargs "unknown or misplaced lambda-list keyword"))))

;;; %BIND-AUX determines whether the parameter list is exhausted.
;;; If not, it parses the next specifier.

(defun %bind-aux (lexp oldargs varlist fenv benv genv venv specials body)
  (cond ((endp varlist)
	 (%evprogn body venv fenv benv genv))
	(t (let ((varspec (car varlist)))
	     (cond ((symbolp varspec)
		    (if (%lambda-keyword-p varspec)
			(%bad-lambda-exp lexp oldargs "unknown or misplaced lambda-list keyword")
			(%process-aux lexp oldargs varlist fenv benv genv
				      venv specials body varspec nil)))
		   ((and (consp varspec)
			 (symbolp (car varspec))
			 (listp (cdr varspec))
			 (endp (cddr varspec)))
		    (%process-aux lexp oldargs varlist fenv benv genv
				       venv specials body
				       (car varspec)
				       (cadr varspec)))
		   (t (%bad-lambda-exp lexp oldargs "malformed aux variable specifier")))))))

;;; %PROCESS-AUX takes care of binding the auxiliary variable.

(defun %process-aux (lexp oldargs varlist fenv benv genv venv specials body var init)
  (%bind-var var (and init (%eval init venv fenv benv genv))
    (%bind-aux lexp oldargs (cdr varlist) fenv benv genv venv specials body)))
!
;;; Definitions for various special forms and macros.

(defspec quote (obj) (venv fenv benv genv) obj)

(defspec function (fn) (venv fenv benv genv)
  (cond ((consp fn)
	 (cond ((eq (car fn) 'lambda)
		(make-interpreted-lexical-closure :function fn :venv venv :fenv fenv :benv benv :genv genv))
	       (t (cerror ???))))
	((symbolp fn)
	 (loop (let ((slot (assoc fn fenv)))
		 (unless (null slot)
		   (case (cadr slot)
		     (macro (cerror ???))
		     (function (return (cddr slot)))
		     (t <implementation-error>))))
	       (when (fboundp fn)
		 (cond ((or (special-form-p fn) (macro-p fn))
			(cerror ???))
		       (t (return (symbol-function fn)))))
	       (setq fn (cerror :undefined-function
				"The symbol ~S has no function definition"
				fn))
	       (unless (symbolp fn) (return fn))))
	(t (cerror ???))))

(defspec if (pred con &optional alt) (venv fenv benv genv)
  (if (%eval pred venv fenv benv genv)
      (%eval con venv fenv benv genv)
      (%eval alt venv fenv benv genv)))

;;; The BLOCK construct provides a PROGN with a named contour around it.
;;; It is interpreted by first putting an entry onto BENV, consisting
;;; of a 2-list of the name and NIL.  This provides two unique conses
;;; for use as catch tags.  Then the body is executed.
;;; If a RETURN or RESTART is interpreted, a throw occurs.  If the BLOCK
;;; construct is exited for any reason (including falling off the end, which
;;; returns the results of evaluating the last form in the body), the NIL in
;;; the entry is clobbered to be INVALID, to indicate that that particular
;;; entry is no longer valid for RETURN or RESTART.
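;;;
;;; For example, (BLOCK FOO (RETURN-FROM FOO 3) 4) returns 3, while
;;; (BLOCK FOO 4) simply falls off the end and returns 4.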

(defspec block (name &body body) (venv fenv benv genv)
  (let ((slot (list name nil)))	;Use slot for return, (cdr slot) for restart
    (unwind-protect
     (catch slot
       (block exit
	 (loop (catch (cdr slot)
		 (return-from exit
		   (%evprogn body venv fenv (cons slot benv) genv))))))
     (rplaca (cdr slot) 'invalid)))) 

(defspec return (form) (venv fenv benv genv)
  (let ((slot (assoc nil benv)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw slot (%eval form venv fenv benv genv))))))

(defspec return-from (name form) (venv fenv benv genv)
  (let ((slot (assoc name benv)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw slot (%eval form venv fenv benv genv))))))

(defspec restart (form) (venv fenv benv genv)
  (let ((slot (assoc nil benv)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw (cdr slot) (%eval form venv fenv benv genv))))))

(defspec restart-from (name form) (venv fenv benv genv)
  (let ((slot (assoc name benv)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw (cdr slot) (%eval form venv fenv benv genv))))))
!
(defmacro prog (vars &rest body)
  (do ((b body (cdr b))
       (decls '() (cons (car b) decls)))
      ((or (endp b)
	   (atom (car b))
	   (not (eq (caar b) 'declare)))
       `(let ,vars ,@(nreverse decls) (block nil (tagbody ,@b))))))
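
;;; For example, (PROG (X) (DECLARE (SPECIAL X)) A (GO A)) expands into
;;; (LET (X) (DECLARE (SPECIAL X)) (BLOCK NIL (TAGBODY A (GO A)))).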

;;; The TAGBODY construct provides a body with GO tags in it.
;;; It is interpreted by first putting one entry onto GENV for
;;; every tag in the body; doing this ahead of time saves searching
;;; at GO time.  A unique cons whose car is NIL is constructed for
;;; use as a unique catch tag.  Then the body is executed.
;;; If a GO is interpreted, a throw occurs, sending as the thrown
;;; value the point in the body after the relevant tag.
;;; If the TAGBODY construct is exited for any reason (including
;;; falling off the end, which produces the value NIL), the car of
;;; the unique marker is clobbered to be INVALID, to indicate that
;;; tags associated with that marker are no longer valid.
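;;;
;;; For example, in (TAGBODY (GO SKIP) (PRINT 'NEVER) SKIP), the GO
;;; throws past the PRINT and the whole form returns NIL.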

(defspec tagbody (&rest body) (venv fenv benv genv)
  (do ((b body (cdr b))
       (marker (list nil)))
      ((endp b)
       (block exit
	 (unwind-protect
	  (loop (setq body
		      (catch marker
			(do ((b body (cdr b)))
			    ((endp b) (return-from exit nil))
			  (unless (atom (car b))
			    (%eval (car b) venv fenv benv genv))))))
	  (rplaca marker 'invalid))))
    (when (atom (car b))
      (push (list* (car b) marker (cdr b)) genv))))

(defspec go (tag) (venv fenv benv genv)
  (let ((slot (assoc tag genv)))
    (cond ((null slot) (ferror ???<unseen-go-tag>))
	  ((eq (caadr slot) 'invalid) (ferror ???<go-tag-no-longer-valid>))
	  (t (throw (cadr slot) (cddr slot))))))
-----------------------------------------------------------
;;; This version uses some special variables to avoid passing stuff around.

;;; This evaluator splits the lexical environment into four
;;; logically distinct entities:
;;;	VENV = lexical variable environment
;;;	FENV = lexical function and macro environment
;;;	BENV = block name environment
;;;	GENV = go tag environment
;;; Each environment is an a-list.  It is never the case that
;;; one can grow and another shrink simultaneously; the four
;;; parts could be united into a single a-list.  The four-part
;;; division saves consing and search time.
;;;
;;; In this implementation, the four environment parts are normally
;;; kept in four special variables %VENV%, %FENV%, %BENV%, and %GENV%.
;;; (These are internal to the implementation, and are not meant to
;;; be user-accessible.)

(defvar %venv% nil)
(defvar %fenv% nil)
(defvar %benv% nil)
(defvar %genv% nil)

;;; Each entry in VENV has one of two forms: (VAR VALUE) or (VAR).
;;; The first indicates a lexical binding of VAR to VALUE, and the
;;; second indicates a special binding of VAR (implying that the
;;; special value should be used).
;;;
;;; Each entry in FENV looks like (NAME TYPE . FN), where NAME is the
;;; functional name, TYPE is either FUNCTION or MACRO, and FN is the
;;; function or macro-expansion function, respectively.  Entries of
;;; type FUNCTION are made by FLET and LABELS; those of type MACRO
;;; are made by MACROLET.
;;;
;;; Each entry in BENV looks like (NAME NIL), where NAME is the name
;;; of the block.  The NIL is there primarily so that two distinct
;;; conses will be present, namely the entry and the cdr of the entry.
;;; These are used internally as catch tags, the first for RETURN and the
;;; second for RESTART.  If the NIL has been clobbered to be INVALID,
;;; then the block has been exited, and a return to that block is an error.
;;;
;;; Each entry in GENV looks like (TAG MARKER . BODY), where TAG is
;;; a go tag, MARKER is a unique cons used as a catch tag, and BODY
;;; is the statement sequence that follows the go tag.  If the car of
;;; MARKER, normally NIL, has been clobbered to be INVALID, then
;;; the tag body has been exited, and a go to that tag is an error.

;;; An interpreted-lexical-closure contains a function (normally a
;;; lambda-expression) and the lexical environment.

(defstruct interpreted-lexical-closure function venv fenv benv genv)


;;; The EVALHOOK feature allows a user-supplied function to be called
;;; whenever a form is to be evaluated.  The presence of the lexical
;;; environment requires an extension of the feature as it is defined
;;; in MacLISP.  Here, the user hook function must accept not only
;;; the form to be evaluated, but also the components of the lexical
;;; environment; these must then be passed verbatim to EVALHOOK or
;;; *EVAL in order to perform the evaluation of the form correctly.
;;; The precise number of components should perhaps be allowed to be
;;; implementation-dependent, so it is probably best to require the
;;; user hook function to accept arguments as (FORM &REST ENV) and
;;; then to perform evaluation by (APPLY #'EVALHOOK FORM HOOKFN ENV),
;;; for example.

(defvar evalhook nil)

(defun evalhook (exp hookfn %venv% %fenv% %benv% %genv%)
  (let ((evalhook hookfn))
	(%eval exp)))

(defun eval (exp)
  (*eval exp nil nil nil nil))

(defun *eval (exp %venv% %fenv% %benv% %genv%)
  (%eval exp))
!
;;; Function names beginning with "%" are intended to be internal
;;; and not defined in the Common LISP white pages.

;;; %EVAL is the main evaluation function.  It evaluates EXP in
;;; the current lexical environment, assumed to be in %VENV%, etc.

(defun %eval (exp)
  (if (not (null evalhook))
      (let ((hookfn evalhook) (evalhook nil))
	(funcall hookfn exp %venv% %fenv% %benv% %genv%))
      (typecase exp
	;; A symbol is first looked up in the lexical variable environment.
	(symbol (let ((slot (assoc exp %venv%)))
		  (cond ((and (not (null slot)) (not (null (cdr slot))))
			 (cadr slot))
			((boundp exp) (symbol-value exp))
			(t (cerror :unbound-variable
				   "The symbol ~S has no value"
				   exp)))))
	;; Numbers, strings, and characters self-evaluate.
	((or number string character) exp)
	;; Conses require elaborate treatment based on the car.
	(cons (typecase (car exp)
		;; A symbol is first looked up in the lexical function environment.
		;; This lookup is cheap if the environment is empty, a common case.
		(symbol
		 (let ((fn (car exp)))
		   (loop (let ((slot (assoc fn %fenv%)))
			   (unless (null slot)
			     (return (case (cadr slot)
				       (macro (%eval (%macroexpand
						      (cddr slot)
						      (if (eq fn (car exp))
							  exp
							  (cons fn (cdr exp))))))
				       (function (%apply (cddr slot)
							 (%evlis (cdr exp))))
				       (t <implementation-error>)))))
			 ;; If not in lexical function environment,
			 ;;  try the definition cell of the symbol.
			 (when (fboundp fn)
			   (return (cond ((special-form-p fn)
					  (%invoke-special-form fn (cdr exp)))
					 ((macro-p fn)
					  (%eval (%macroexpand
						  (get-macro-function (symbol-function fn))
						  (if (eq fn (car exp))
						      exp
						      (cons fn (cdr exp))))))
					 (t (%apply (symbol-function fn)
						    (%evlis (cdr exp)))))))
			 (setq fn
			       (cerror :undefined-function
				       "The symbol ~S has no function definition"
				       fn))
			 (unless (symbolp fn)
			   (return (%apply fn (%evlis (cdr exp))))))))
		;; A cons in function position must be a lambda-expression.
		;; Note that the construction of a lexical closure is avoided here.
		(cons (%lambda-apply (car exp) (%evlis (cdr exp))))
		(t (%eval (cerror :invalid-form
				  "Cannot evaluate the form ~S: function position has invalid type ~S"
				  exp (type-of (car exp)))))))
	(t (%eval (cerror :invalid-form
			  "Cannot evaluate the form ~S: invalid type ~S"
			  exp (type-of exp)))))))
!
;;; Given a list of forms, evaluate each and return a list of results.

(defun %evlis (forms)
  (mapcar #'(lambda (form) (%eval form)) forms))

;;; Given a list of forms, evaluate each, discarding the results of
;;; all but the last, and returning all results from the last.

(defun %evprogn (body)
  (if (endp body) nil
      (do ((b body (cdr b)))
	  ((endp (cdr b))
	   (%eval (car b)))
	(%eval (car b)))))

;;; APPLY takes a function, a number of single arguments, and finally
;;; a list of all remaining arguments.  The following song and dance
;;; attempts to construct efficiently a list of all the arguments.

(defun apply (fn firstarg &rest args*)
  (%apply fn
	  (cond ((null args*) firstarg)
		((null (cdr args*)) (cons firstarg (car args*)))
		(t (do ((x args* (cdr x))
			(z (cddr args*) (cdr z)))
		       ((null z)
			(rplacd x (cadr x))
			(cons firstarg (car args*))))))))
!
;;; %APPLY does the real work of applying a function to a list of arguments.

(defun %apply (fn args)
  (typecase fn
    ;; For closures over dynamic variables, complex magic is required.
    (closure (with-closure-bindings-in-effect fn
					      (%apply (closure-function fn) args)))
    ;; For a compiled function, an implementation-dependent "spread"
    ;;  operation and invocation is required.
    (compiled-function (%invoke-compiled-function fn args))
    ;; The same goes for a compiled closure over lexical variables.
    (compiled-lexical-closure (%invoke-compiled-lexical-closure fn args))
    ;; The treatment of interpreted lexical closures is elucidated fully here.
    (interpreted-lexical-closure
     (let ((%venv% (interpreted-lexical-closure-venv fn))
	   (%fenv% (interpreted-lexical-closure-fenv fn))
	   (%benv% (interpreted-lexical-closure-benv fn))
	   (%genv% (interpreted-lexical-closure-genv fn)))
       (%lambda-apply (interpreted-lexical-closure-function fn) args)))
    ;; For a symbol, the function definition is used, if it is a function.
    (symbol (%apply (cond ((not (fboundp fn))
			   (cerror :undefined-function
				   "The symbol ~S has no function definition"
				   fn))
			  ((special-form-p fn)
			   (cerror :invalid-function
				   "The symbol ~S cannot be applied: it names a special form"
				   fn))
			  ((macro-p fn)
			   (cerror :invalid-function
				   "The symbol ~S cannot be applied: it names a macro"
				   fn))
			  (t (symbol-function fn)))
		    args))
    ;; Applying a raw lambda-expression uses the null lexical environment.
    (cons (if (eq (car fn) 'lambda)
	      (let ((%venv% nil) (%fenv% nil) (%benv% nil) (%genv% nil))
		(%lambda-apply fn args))
	      (%apply (cerror :invalid-function
			      "~S is not a valid function"
			      fn)
		      args)))
    (t (%apply (cerror :invalid-function
		       "~S has an invalid type ~S for a function"
		       fn (type-of fn))
	       args))))
!
;;; %LAMBDA-APPLY is the hairy part, that takes care of applying
;;; a lambda-expression in a given lexical environment to given
;;; arguments.  The complexity arises primarily from the processing
;;; of the parameter list.
;;;
;;; If at any point the lambda-expression is found to be malformed
;;; (typically because of an invalid parameter list), or if the list
;;; of arguments is not suitable for the lambda-expression, a correctable
;;; error is signalled; correction causes a throw to be performed to
;;; the tag %LAMBDA-APPLY-RETRY, passing back a (possibly new)
;;; lambda-expression and a (possibly new) list of arguments.
;;; The application is then retried.  If the new lambda-expression
;;; is not really a lambda-expression, then %APPLY is used instead of
;;; %LAMBDA-APPLY.
;;;
;;; In this evaluator, PROGV is used to instantiate variable bindings
;;; (though its use is embedded within a macro called %BIND-VAR).
;;; The throw that precedes a retry will cause special bindings to
;;; be popped before the retry.

(defun %lambda-apply (lexp args)
  (multiple-value-bind (newfn newargs)
		       (catch '%lambda-apply-retry
			 (return-from %lambda-apply
			   (let ((%venv% %venv%))
			     (%lambda-apply-1 lexp args))))
    (if (and (consp newfn) (eq (car newfn) 'lambda))
	(%lambda-apply newfn newargs)
	(%apply newfn newargs))))

;;; Calling this function will unwind all special variables
;;; and cause FN to be applied to ARGS in the original lexical
;;; and dynamic environment in force when %LAMBDA-APPLY was called.

(defun %lambda-apply-retry (fn args)
  (throw '%lambda-apply-retry (values fn args)))

;;; This function is convenient when the lambda expression is found
;;; to be malformed.  REASON should be a string explaining the problem.

(defun %bad-lambda-exp (lexp oldargs reason)
  (%lambda-apply-retry
   (cerror :invalid-function
	   "Improperly formed lambda-expression ~S: ~A"
	   lexp reason)
   oldargs))

;;; (%BIND-VAR VAR VALUE . BODY) evaluates VAR to produce a symbol name
;;; and VALUE to produce a value.  If VAR is determined to have been
;;; declared special (as indicated by the current binding of the variable
;;; SPECIALS, which should be a list of symbols, or by a SPECIAL property),
;;; then a special binding is established using PROGV.  Otherwise an
;;; entry is pushed onto the a-list presumed to be in the variable %VENV%.

(defmacro %bind-var (var value &body body)
  `(let ((var ,var) (value ,value))
     (let ((specp (or (member var specials) (get var 'special))))
       (progv (and specp (list var)) (and specp (list value))
	 (push (if specp (list var) (list var value)) %venv%)
	 ,@body))))

;;; %LAMBDA-KEYWORD-P is true iff X (which must be a symbol)
;;; has a name beginning with an ampersand.

(defun %lambda-keyword-p (x)
  (char= #\& (char 0 (symbol-pname x))))
!
;;; %LAMBDA-APPLY-1 is responsible for verifying that LEXP is
;;; a lambda-expression, for extracting a list of all variables
;;; declared SPECIAL in DECLARE forms, and for finding the
;;; body that follows any DECLARE forms.

(defun %lambda-apply-1 (lexp args)
  (cond ((or (not (consp lexp))
	     (not (eq (car lexp) 'lambda))
	     (atom (cdr lexp))
	     (not (listp (cadr lexp))))
	 (%bad-lambda-exp lexp args "improper lambda or lambda-list"))
	(t (do ((body (cddr lexp) (cdr body))
		(specials '()))
	       ((or (endp body)
		    (not (listp (car body)))
		    (not (eq (caar body) 'declare)))
		(%bind-required lexp args (cadr lexp) args specials body))
	     (dolist (decl (cdar body))
	       (when (eq (car decl) 'special)
		 (setq specials
		       (if (null specials)		;Avoid consing
			   (cdar decl)
			   (append (cdar decl) specials)))))))))

;;; %BIND-REQUIRED handles the pairing of arguments to required parameters.
;;; Error checking is performed for too few or too many arguments.
;;; If a lambda-list keyword is found, %TRY-OPTIONAL is called.
;;; Here, as elsewhere, if the binding process terminates satisfactorily
;;; then the body is evaluated using %EVPROGN in the newly constructed
;;; dynamic and lexical environment.

(defun %bind-required (lexp oldargs varlist args specials body)
  (cond ((endp varlist)
	 (if (null args)
	     (%evprogn body)
	     (%lambda-apply-retry lexp
				  (cerror :too-many-arguments
					  "Too many arguments for function ~S: ~S"
					  lexp args))))
	((not (symbolp (car varlist)))
	 (%bad-lambda-exp lexp oldargs "required parameter name not a symbol"))
	((%lambda-keyword-p (car varlist))
	 (%try-optional lexp oldargs varlist args specials body))
	((null args)
	 (%lambda-apply-retry lexp 
			      (cerror :too-few-arguments
				      "Too few arguments for function ~S: ~S"
				      lexp oldargs)))
	(t (%bind-var (car varlist) (car args)
		      (%bind-required lexp oldargs varlist (cdr args) specials body)))))
!
;;; %TRY-OPTIONAL determines whether the lambda-list keyword &OPTIONAL
;;; has been found.  If so, optional parameters are processed; if not,
;;; the buck is passed to %TRY-REST.

(defun %try-optional (lexp oldargs varlist args specials body)
  (cond ((eq (car varlist) '&optional)
	 (%bind-optional lexp oldargs (cdr varlist) args specials body))
	(t (%try-rest lexp oldargs varlist args specials body))))

;;; %BIND-OPTIONAL determines whether the parameter list is exhausted.
;;; If not, it parses the next specifier.

(defun %bind-optional (lexp oldargs varlist args specials body)
  (cond ((endp varlist)
	 (if (null args)
	     (%evprogn body)
	     (%lambda-apply-retry lexp
				  (cerror :too-many-arguments
					  "Too many arguments for function ~S: ~S"
					  lexp args))))
	(t (let ((varspec (car varlist)))
	     (cond ((symbolp varspec)
		    (if (%lambda-keyword-p varspec)
			(%try-rest lexp oldargs varlist args specials body)
			(%process-optional lexp oldargs varlist args specials body varspec nil nil)))
		   ((and (consp varspec)
			 (symbolp (car varspec))
			 (listp (cdr varspec))
			 (or (endp (cddr varspec))
			     (and (symbolp (caddr varspec))
				  (not (endp (caddr varspec)))
				  (endp (cdddr varspec)))))
		    (%process-optional lexp oldargs varlist args specials body
				       (car varspec)
				       (cadr varspec)
				       (caddr varspec)))
		   (t (%bad-lambda-exp lexp oldargs "malformed optional parameter specifier")))))))

;;; %PROCESS-OPTIONAL takes care of binding the parameter,
;;; and also the supplied-p variable, if any.

(defun %process-optional (lexp oldargs varlist args specials body var init varp)
  (let ((value (if (null args) (%eval init) (car args))))
    (%bind-var var value
      (if varp
	  (%bind-var varp (not (null args))
	    (%bind-optional lexp oldargs varlist args specials body))
	  (%bind-optional lexp oldargs varlist args specials body)))))
!
;;; %TRY-REST determines whether the lambda-list keyword &REST
;;; has been found.  If so, the rest parameter is processed;
;;; if not, the buck is passed to %TRY-KEY, after a check for
;;; too many arguments.

(defun %try-rest (lexp oldargs varlist args specials body)
  (cond ((eq (car varlist) '&rest)
	 (%bind-rest lexp oldargs (cdr varlist) args specials body))
	((and (not (eq (car varlist) '&key))
	      (not (null args)))
	 (%lambda-apply-retry lexp
			      (cerror :too-many-arguments
				      "Too many arguments for function ~S: ~S"
				      lexp args)))
	(t (%try-key lexp oldargs varlist args specials body))))

;;; %BIND-REST ensures that there is a parameter specifier for
;;; the &REST parameter, binds it, and then evaluates the body or
;;; calls %TRY-KEY.

(defun %bind-rest (lexp oldargs varlist args specials body)
  (cond ((or (endp varlist)
	     (not (symbolp (car varlist))))
	 (%bad-lambda-exp lexp oldargs "missing rest parameter specifier"))
	(t (%bind-var (car varlist) args
	     (cond ((endp (cdr varlist))
		    (%evprogn body))
		   ((and (symbolp (cadr varlist))
			 (%lambda-keyword-p (cadr varlist)))
		    (%try-key lexp oldargs varlist args specials body))
		   (t (%bad-lambda-exp lexp oldargs "malformed after rest parameter specifier")))))))
!
;;; %TRY-KEY determines whether the lambda-list keyword &KEY
;;; has been found.  If so, keyword parameters are processed;
;;; if not, the buck is passed to %TRY-AUX.

(defun %try-key (lexp oldargs varlist args specials body)
  (cond ((eq (car varlist) '&key)
	 (%bind-key lexp oldargs (cdr varlist) args specials body nil))
	(t (%try-aux lexp oldargs varlist specials body))))

;;; %BIND-KEY determines whether the parameter list is exhausted.
;;; If not, it parses the next specifier.

(defun %bind-key (lexp oldargs varlist args specials body keys)
  (cond ((endp varlist)
	 ;; Optional error check for bad keywords.
	 (do ((a args (cddr a)))
	     ((endp a))
	   (unless (member (car a) keys)
	     (cerror :unexpected-keyword
		     "Keyword not expected by function ~S: ~S"
		     lexp (car a))))
	 (%evprogn body))
	(t (let ((varspec (car varlist)))
	     (cond ((symbolp varspec)
		    (if (%lambda-keyword-p varspec)
			(cond ((not (eq varspec '&allow-other-keywords))
			       (%try-aux lexp oldargs varlist specials body))
			      ((endp (cdr varlist))
			       (%evprogn body))
			      ((%lambda-keyword-p (cadr varlist))
			       (%try-aux lexp oldargs (cdr varlist) specials body))
			      (t (%bad-lambda-exp lexp oldargs "invalid after &ALLOW-OTHER-KEYWORDS")))
			(%process-key lexp oldargs varlist args specials body keys
				      (intern varspec keyword-package)
				      varspec nil nil)))
		   ((and (consp varspec)
			 (or (symbolp (car varspec))
			     (and (consp (car varspec))
				  (consp (cdar varspec))
				  (symbolp (cadar varspec))
				  (endp (cddar varspec))))
			 (listp (cdr varspec))
			 (or (endp (cddr varspec))
			     (and (symbolp (caddr varspec))
				  (not (endp (caddr varspec)))
				  (endp (cdddr varspec)))))
		    (%process-key lexp oldargs varlist args specials body keys
				  (if (consp (car varspec))
				      (caar varspec)
				      (intern (car varspec) keyword-package))
				  (if (consp (car varspec))
				      (cadar varspec)
				      (car varspec))
				  (cadr varspec)
				  (caddr varspec)))
		   (t (%bad-lambda-exp lexp oldargs "malformed keyword parameter specifier")))))))

;;; %PROCESS-KEY takes care of binding the parameter,
;;; and also the supplied-p variable, if any.

(defun %process-key (lexp oldargs varlist args specials body keys kwd var init varp)
  (let ((value (do ((a args (cddr a)))
		   ((endp a) (%eval init))
		 (when (eq (car a) kwd)
		   (return (cadr a))))))
    (%bind-var var value
      (if varp
	  (%bind-var varp (not (null args))
	    (%bind-key lexp oldargs varlist args specials body (cons kwd keys)))
	  (%bind-key lexp oldargs varlist args specials body (cons kwd keys))))))
!
;;; %TRY-AUX determines whether the keyword &AUX
;;; has been found.  If so, auxiliary variables are processed;
;;; if not, an error is signalled.

(defun %try-aux (lexp oldargs varlist specials body)
  (cond ((eq (car varlist) '&aux)
	 (%bind-aux lexp oldargs (cdr varlist) specials body))
	(t (%bad-lambda-exp lexp oldargs "unknown or misplaced lambda-list keyword"))))

;;; %BIND-AUX determines whether the parameter list is exhausted.
;;; If not, it parses the next specifier.

(defun %bind-aux (lexp oldargs varlist specials body)
  (cond ((endp varlist)
	 (%evprogn body))
	(t (let ((varspec (car varlist)))
	     (cond ((symbolp varspec)
		    (if (%lambda-keyword-p varspec)
			(%bad-lambda-exp lexp oldargs "unknown or misplaced lambda-list keyword")
			(%process-aux lexp oldargs varlist specials body varspec nil)))
		   ((and (consp varspec)
			 (symbolp (car varspec))
			 (listp (cdr varspec))
			 (endp (cddr varspec)))
		    (%process-aux lexp oldargs varlist specials body
				       (car varspec)
				       (cadr varspec)))
		   (t (%bad-lambda-exp lexp oldargs "malformed aux variable specifier")))))))

;;; %PROCESS-AUX takes care of binding the auxiliary variable.

(defun %process-aux (lexp oldargs varlist specials body var init)
    (%bind-var var (and init (%eval init))
       (%bind-aux lexp oldargs varlist specials body)))
!
;;; Definitions for various special forms and macros.

(defspec quote (obj) obj)

(defspec function (fn)
  (cond ((consp fn)
	 (cond ((eq (car fn) 'lambda)
		(make-interpreted-closure :function fn :venv %venv% :fenv %fenv% :benv %benv% :genv %genv%))
	       (t (cerror ???))))
	((symbolp fn)
	 (loop (let ((slot (assoc fn %fenv%)))
		 (unless (null slot)
		   (case (cadr slot)
		     (macro (cerror ???))
		     (function (return (cddr slot)))
		     (t <implementation-error>))))
	       (when (fboundp fn)
		 (cond ((or (special-form-p fn) (macro-p fn))
			(cerror ???))
		       (t (return (symbol-function fn)))))
	       (setq fn (cerror :undefined-function
				"The symbol ~S has no function definition"
				fn))
	       (unless (symbolp fn) (return fn))))
	(t (cerror ???))))

(defspec if (pred con &optional alt)
  (if (%eval pred) (%eval con) (%eval alt)))

;;; The BLOCK construct provides a PROGN with a named contour around it.
;;; It is interpreted by first putting an entry onto BENV, consisting
;;; of a 2-list of the name and NIL.  This provides two unique conses
;;; for use as catch tags.  Then the body is executed.
;;; If a RETURN or RESTART is interpreted, a throw occurs.  If the BLOCK
;;; construct is exited for any reason (including falling off the end, which
returns the results of evaluating the last form in the body), the NIL in
;;; the entry is clobbered to be INVALID, to indicate that that particular
;;; entry is no longer valid for RETURN or RESTART.

(defspec block (name &body body)
  (let ((slot (list name nil)))	;Use slot for return, (cdr slot) for restart
    (unwind-protect
     (catch slot
       (block exit
	 (loop (catch (cdr slot)
		 (return-from exit
		   (let ((%benv% (cons slot %benv%)))
		     (%evprogn body)))))))
     (rplaca (cdr slot) 'invalid)))) 

(defspec return (form)
  (let ((slot (assoc nil %benv%)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw slot (%eval form))))))

(defspec return-from (name form)
  (let ((slot (assoc name %benv%)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw slot (%eval form))))))

(defspec restart (form)
  (let ((slot (assoc nil %benv%)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw (cdr slot) (%eval form))))))

(defspec restart-from (name form)
  (let ((slot (assoc name %benv%)))
    (cond ((null slot) (ferror ???<unseen-block-name>))
	  ((eq (cadr slot) 'invalid) (ferror ???<block-name-no-longer-valid>))
	  (t (throw (cdr slot) (%eval form))))))
!
(defmacro prog (vars &rest body)
  (do ((b body (cdr b))
       (decls '() (cons (car b) decls)))
      ((or (endp b)
	   (atom (car b))
	   (not (eq (caar b) 'declare)))
       `(let ,vars ,@(nreverse decls) (block nil (tagbody ,@b))))))

;;; The TAGBODY construct provides a body with GO tags in it.
;;; It is interpreted by first putting one entry onto GENV for
;;; every tag in the body; doing this ahead of time saves searching
;;; at GO time.  A unique cons whose car is NIL is constructed for
;;; use as a unique catch tag.  Then the body is executed.
;;; If a GO is interpreted, a throw occurs, sending as the thrown
;;; value the point in the body after the relevant tag.
;;; If the TAGBODY construct is exited for any reason (including
;;; falling off the end, which produces the value NIL), the car of
;;; the unique marker is clobbered to be INVALID, to indicate that
;;; tags associated with that marker are no longer valid.

(defspec tagbody (&rest body)
  (let ((%genv% %genv%))
    (do ((b body (cdr b))
	 (marker (list nil)))
	((endp b)
	 (block exit
	   (unwind-protect
	    (loop (setq body
			(catch marker
			  (do ((b body (cdr b)))
			      ((endp b) (return-from exit nil))
			    (unless (atom (car b))
			      (%eval (car b)))))))
	    (rplaca marker 'invalid))))
      (when (atom (car b))
	(push (list* (car b) marker (cdr b)) %genv%)))))

(defspec go (tag)
  (let ((slot (assoc tag %genv%)))
    (cond ((null slot) (ferror ???<unseen-go-tag>))
	  ((eq (caadr slot) 'invalid) (ferror ???<go-tag-no-longer-valid>))
	  (t (throw (cadr slot) (cddr slot))))))
-------

∂26-Sep-82  2231	Kent M. Pitman <KMP at MIT-MC> 	Indeed, one of us must be confused.   
Date: 27 September 1982 01:08-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Indeed, one of us must be confused.
To: Fahlman at CMU-10A
cc: Common-Lisp at SU-AI

No, I think I understood fully what you meant. Nothing in your reply is in
conflict with my original message. I suggest that it is you who misunderstood
me... My turn now to be less vague.

In any case like the LispM where vectors aren't there, then there would be
cases of the code in this form:

    (COND ((VECTORP obj) ...branch X...)
	  (T ...branch Y...))

Now, it may in some sense seem arbitrary whether branch X or Y is taken, so
long as it's predictable, but I argue that there are good reasons why someone
should want it to take branch Y in the case where vectors are not being
represented.

The first is that on the LispM there really will not be any vectors and that
intuitively the right thing is for nothing to claim to be a vector. I think
the other arguments follow from this but let me lay them out.

The next reason is that one cannot debug branch X fully on a system which
does not have vectors. You can write code like:

    (COND ((VECTORP X)
	   ...simple code which needn't know about fill pointers...)
	  (T
	   ...code which might hack fill pointers if they exist...))

For some applications, this code will function incorrectly on an X which
is 1-D but has a fill pointer because it may be relevant to some kinds
of computation that X does have a fill pointer. Consider:

    (DEFUN SET-STREAM-RUBOUT-HANDLER-BUFFER (STREAM BUFFER)
      (COND ((VECTORP BUFFER)
	     (ERROR "Buffer must have all the hairy features of arrays!"))
	    (T
	     (SET-STREAM-RUBOUT-HANDLER-BUFFER-INTERNAL STREAM BUFFER))))

This is a useful error check to put in portable code and would be
thwarted by VECTORP returning T on non-vectors. Further, one -could- at
least debug branch Y correctly because it would have to worry about the
case of fill pointers. It might have to go to more work than was needed
on a few inputs, but there are no cases where I think a drastically
wrong thing would happen. The reason for this is clear:  Even in systems
with VECTORs, it is possible to construct 1-D arrays which have all the
properties of VECTORs except VECTORness itself.

Hence, in an environment like the LispM, this is exactly what you have.
A thing with all a VECTOR's properties except one -- VECTORness. And since
you can't reliably tell what was made with a VECTOR property and what wasn't,
the right thing is to make the default in the safe direction, which I claim
is to assume they are not the simple case unless you have proof.

-kmp

ps Note that there is a third alternative which is to require that 
   implementations not supporting vectors at least support something 
   which is identifiable as a VECTOR. This could be done by storing a 
   special marker in an array leader, by creating a hash table of all
   vectors, or whatever. I do not advocate this idea, but I do point it out.

∂27-Sep-82  0031	Alan Bawden <ALAN at MIT-MC> 	What is this RESTART kludge?  
Date: 27 September 1982 03:29-EDT
From: Alan Bawden <ALAN at MIT-MC>
Subject: What is this RESTART kludge?
To: Common-Lisp at SU-AI
cc: ALAN at MIT-MC

First a simple question.  What is the subform in a RESTART form good for?
According to the interpreter, a RESTART form contains a subform that gets
evaluated before the restart happens.  Its value is carefully thrown back to
the matching BLOCK, and is then ignored.  My current best theory is that it is
simply a spaz on GLS's part, perhaps because he simply copied the code for
RETURN.  For the rest of this message I am going to assume that this is the
case and that this subform isn't really there.

Also in the recent evaluators there are two RESTART special forms.  RESTART and
RESTART-FROM.  Where RESTART could be a macro expanding as: (RESTART) ==>
(RESTART-FROM NIL).

According to Moon's notes, the decision to have this RESTART form was made at
the last meeting, and there it was agreed that the syntax would be:  
(RESTART [block-name]) presumably with the block-name defaulting to NIL.  At
the very least I would prefer this to having the additional RESTART-FROM form.
[Although perhaps this is evidence that GLS really intends there to be a
gratuitous subform in a RESTART?]

A somewhat larger complaint I have is that having (RESTART) mean to restart a
block named NIL is a really BAD idea.  Blocks named NIL are frequently produced by
macros (like LOOP, DOTIMES, etc.) that would do something totally bogus if you
were to try to restart them.  A block named NIL means an iteration, and the
fact that you can RETURN from it is a convenience because you frequently want
to exit a loop in an arbitrary manner, but restarting an iteration is not
something that needs to be made convenient.  Also I really dread the thought of
re-writing all the macros in the world that use DO or PROG to allow for some
loser restarting them.  If we must have RESTART, then it should ALWAYS take a
block name.
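
For concreteness, here is a schematic expansion of a DOTIMES-like macro (this
is not any particular implementation's expansion, just the shape of the thing);
the block named NIL belongs entirely to the macro's own machinery:

(defmacro my-dotimes ((var count) &body body)
  `(block nil
     (let ((,var 0))
       (tagbody
        loop (when (>= ,var ,count) (return nil))  ;schematic: COUNT gets re-evaluated
             ,@body
             (setq ,var (+ ,var 1))
             (go loop)))))

A (RESTART) issued from inside the body of such a loop would re-enter the
expansion from the top, re-initializing the counter -- certainly not what the
person who wrote the body had in mind.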

Finally, I really don't understand why we need to clutter up the language with
this RESTART thing in the first place.  Why should I prefer
(BLOCK FOO ... (RESTART FOO) ...) over (TAGBODY FOO ... (GO FOO) ...)?  I have
this sneaking suspicion that it has something to do with gotophobia, which is
silly.

It seems that we have nicely split the actions of PROG into three simpler
special forms, and then we are repeating the mistake we made in overloading
PROG by adding a new kludge to BLOCK.  Wouldn't the following do just as well
as a private macro for people who like their code to look this way?

(defmacro restartable (name &body body) `(tagbody ,name (progn ,@body)))

(defmacro restart (name) `(go ,name))


Alternatives:  [in order of decreasing desirability in my opinion]

A)  Flush it.  [Yeah!]

B)  Have a fifth distinct environment containing restart block names, and a new
special form to introduce it (similar to the RESTARTABLE macro defined above).
In this case it wouldn't be totally unreasonable to have (RESTART) mean
(RESTART NIL), since it wouldn't interact with BLOCKs at all.  [In other words,
if we have to have it, lets do it right.]

C)  Install the RESTART and RESTARTABLE macros I defined above.  [If we can't
do it right, lets build knowledge of it into as few places as possible.]

D)  Keep the restart block namespace the same as the return block namespace,
but specify that a block name must ALWAYS be given to RESTART.  [At least let's
not have it be a shaft.]

E)  Keep things as they are now except perhaps for clarifying the bit about the
random subform and the need for both RESTART and RESTART-FROM.  [The very least
we can do.]

∂27-Sep-82  1848	Scott E. Fahlman <Fahlman at Cmu-20c> 	Indeed, one of us must be confused. 
Date: Monday, 27 September 1982  21:49-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Kent M. Pitman <KMP at MIT-MC>
Cc:   Common-Lisp at SU-AI
Subject: Indeed, one of us must be confused.


KMP suggests that "on the LispM there really will not be any vectors and
that intuitively the right thing is for nothing to claim to be a
vector."  All I can say is that my intuition differs from his on this
issue.  I think it is perfectly intuitive to think of vectors on some
implementations as being limited creatures, and that the LispM is a
superset in that its vectors can do some extra tricks.  In KMP's
proposal, it is the implementations that "support vectors" that are the
superset -- that seems strange to me.

As KMP points out, you cannot debug portable Common Lisp code completely
on a system that provides a superset of the Common Lisp functionality,
unless a degraded "compatibility-mode" is provided or perhaps some sort
of portability checker in the compiler.  This is not a new situation --
it occurs wherever Common Lisp has stopped short of providing the full
Zetalisp features.  There are many such cases already, and this one
doesn't bother me.

So the "RPG memorial" proposal still looks OK to me, as it is.

∂27-Sep-82  2014	Scott E. Fahlman <Fahlman at Cmu-20c> 	What is this RESTART kludge?   
Date: Monday, 27 September 1982  23:14-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Alan Bawden <ALAN at MIT-MC>
Cc:   Common-Lisp at SU-AI
Subject: What is this RESTART kludge?


I agree with Alan Bawden that RESTART should not have a value-returning
subform, and that RESTART-FROM is silly.  It is probably also OK to
flush (RESTART NIL) and require a non-null block-name.

The principal use that I see for RESTART is to allow certain forms to
conveniently restart a function from the top, making use of the implicit
named block around each defun body.  For example, one might test whether
an argument is of the proper type or meets some other criterion and, if
not, signal the problem with CERROR, asking for a new argument.  If the
user returns one, it is nice to be able to fix the arg in question, then
restart the function.  This will not work if the code has changed the
value of other argument variables, but it works in most cases.
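
Schematically, the sort of thing I have in mind looks like this (the error
keyword here is made up, and I am assuming the implicit block named FOO that
DEFUN puts around the body):

(defun foo (x)
  (unless (numberp x)
    ;; Ask the user for a replacement argument, then start over.
    (setq x (cerror :wrong-type-argument
                    "The argument to FOO, ~S, should be a number" x))
    (restart foo))
  (* x x))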

It is true that this could be done by changing the function's body into
a PROG and placing a tag at the start (not forgetting to add a RETURN in
the proper place), but this is a pain; I find that this is just enough
extra hassle to keep me from doing the right thing with a CERROR in some
cases.  Having RESTART around would cause me (and, I bet, other lazy
slobs) to write somewhat better code in these cases, and the RESTART is
more self-explanatory than the equivalent PROG/TAG/GO would be.  I don't
think this is blind gotophobia; on the other hand, if RESTART were
flushed, I could probably live with that.  I just think it's a nice
minor convenience and worth the small additional clutter.

By the way, are we converging toward the name TAGBODY?  I much prefer
PROGBODY for this use.

-- Scott

∂27-Sep-82  2106	Guy.Steele at CMU-10A 	RESTART and TAGBODY   
Date: 28 September 1982 0007-EDT (Tuesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject: RESTART and TAGBODY

Yes, indeed, I'm the spazzer to blame for that form that RESTART takes
in the sample interpreter, and it did happen by improper copying of the code
for RETURN.  Sigh.  I made it into RESTART and RESTART-FROM to parallel
RETURN and RETURN-FROM, but there seems to be universal opposition to this,
so I'll rename RESTART-FROM => RESTART if it gets kept.

As for TAGBODY versus PROGBODY, it seemed to me that the most important
characteristic of it was not that it is used to implement PROG (which
is the oldest but by no means most important construct to have such a body),
but rather that it bore tags; that is, I chose a name that described it
intrinsically rather than extrinsically.  I'm not passionate about this.
--Guy

∂28-Sep-82  0601	DLW at MIT-MC 	Arrays and vectors (again)    
Date: Tuesday, 28 September 1982  08:59-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   Scott E. Fahlman <Fahlman at Cmu-20c>
Cc:   common-lisp at SU-AI
Subject: Arrays and vectors (again)

Well, the "RPG memorial" as it stands says that only strings can
be used as inputs to the special "string-specific" functions, but
your reply to me says that actually even non-simple char-arrays
can be used as inputs to the string functions.  So I guess then
the name "string" functions isn't very good under this proposal,
since these functions aren't really limited to strings: all
strings are vectors, but you are telling me that you can indeed
pass char-arrays-with-fill-pointers to them as well.  What I am
saying is that your new proposed nomenclature is confusing in this
regard.

I admit that this isn't as bad as what the "RPG memorial" message
seemed to say (that only strings worked with the string functions),
but it still seems sort of suboptimal.

Also, what does stringp do?  Does it ever return t for anything with a fill
pointer?  If so, then stringp returns t for things that are not strings,
which seems unacceptable; if not, then stringp will return NIL for some
things that print exactly the same way that strings print, and otherwise
behave very similarly, which seems undesirable.

I guess my opinion is that I am opposed although not fatally opposed.
-------

∂28-Sep-82  0614	DLW at MIT-MC 	What is this RESTART kludge?  
Date: Tuesday, 28 September 1982  09:03-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   Alan Bawden <ALAN at MIT-MC>
Cc:   Common-Lisp at SU-AI
Subject: What is this RESTART kludge?

I agree that a required tag and no subform is the right thing.

I like RESTART because it is an extremely clear way of signalling
my intentions, whereas PROG/PROGBODY/TAGBODY is not.  It therefore
makes my code easier to read.

It's OK with me if it's implemented as a macro that turns into
a PROGBODY/TAGBODY internally.
-------

∂28-Sep-82  0616	DLW at MIT-MC 	Indeed, one of us must be confused.
Date: Tuesday, 28 September 1982  09:08-EDT
Sender: DLW at MIT-OZ
From: DLW at MIT-MC
To:   Scott E. Fahlman <Fahlman at Cmu-20c>
Cc:   Common-Lisp at SU-AI, Kent M. Pitman <KMP at MIT-MC>
Subject: Indeed, one of us must be confused.

Well, I think the problem that is really at the heart of what
KMP is talking about is the same as something I brought up
at the meeting regarding TYPEP.  KMP wants a function that
asks "can this very array support this optional feature",
whereas what he is given is a function that asks "might this
feature be supported by some implementation given something
of this 'type'?".  He wants a way to ask whether a given
array object really cannot do certain operations.  This is
a useful thing, but it is not provided mainly because C.L.
spends more time worrying about how to make the differences
invisible when they ought to be invisible, rather than how to
make them visible when they ought to be visible.  In the rubout-handler
example, VECTORP shouldn't be used; there should be a new function
or set of functions to test the array and report whether it can
or cannot handle the particular feature in question.  VECTORP
is really wrong since, in the way KMP is trying to use it,
it tests the AND of several unrelated hairy array features;
using a specific feature tester predicate would be more powerful,
clear, and useful.
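
For example (the predicate name here is made up -- I am not proposing any
particular name), the rubout-handler check would become something like

(defun set-stream-rubout-handler-buffer (stream buffer)
  (cond ((not (array-has-fill-pointer-p buffer))
         (error "Buffer must have a fill pointer!"))
        (t (set-stream-rubout-handler-buffer-internal stream buffer))))

which asks exactly the question the code cares about instead of inferring the
answer from VECTORP.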
-------
∂28-Sep-82  0421	KMP at MIT-MC  
Date: Tuesday, 28 September 1982  07:15-EDT
Sender: KMP at MIT-OZ
From: KMP at MIT-MC
To:   Guy.Steele at CMUA
cc:   EAK at MC, DLW at SCRC, RPG at SAIL, FAHLMAN at CMUC, KMP at MC,
      GSB at ML

Here are my notes on your proposed evaluator. I've not CC'd them to everyone
on the list since I think it's likely that most people will not be much 
interested in so detailed a study. Maybe RPG or Fahlman or whoever maintains
such information could take care of entering these notes in the Common-Lisp 
archives just so they don't get lost...Thanks. -kmp

-----Begin Notes-----
Rather than pass back code, I will make texty comments and try to "convince
you" that my fixes are right. That will help make sure you doublecheck my
results... I have avoided putting whole function descriptions in here so that
if I had more than one note about a function, you can accept one point without
accepting all of them in cases where I may have erred.

!
Bugs
----

%BIND-REQUIRED's first call to CERROR uses ARGS where it means to use OLDARGS.
%BIND-OPTIONAL and %TRY-REST also have this problem in their calls to CERROR.

%BIND-REQUIRED calls itself recursively in the T clause without stepping
VARLIST. It wants to say 
  ... (T (%BIND-VAR (CAR VARLIST) (CAR ARGS)
		    (%BIND-REQUIRED LEXP OLDARGS (CDR VARLIST) ...))) ...
The ARGS variable is correctly stepped here.

Neither %BIND-OPTIONAL nor %PROCESS-OPTIONAL steps VARLIST. I think 
%PROCESS-OPTIONAL should be the one to do this since ARGS also wants to
be stepped in %PROCESS-OPTIONAL (this is also not done in your version) and
the stepping might as well happen together. So that would look like
 (%BIND-OPTIONAL LEXP OLDARGS (CDR VARLIST) OLDVENV FENV BENV GENV VENV
		 (CDR ARGS) SPECIALS BODY)
for both recursive calls in that function.

In the T clause of %BIND-REST, %TRY-KEY should be called with an arg of
(CDR VARLIST), not VARLIST, since (CADR VARLIST) is what has been determined
to be a key. [You didn't step ARGS at this point, which is the right thing.]

%BIND-KEY does not catch unexpected keywords if the varlist doesn't run out
because it hits an &AUX. ie, the only place that checks for unexpected 
keywords is looking for (ENDP VARLIST). I think you want to factor out the
code that does the bad keyword loop to a separate function and call it also
from the middle of the next clause of the COND something like:
	(IF (%LAMBDA-KEYWORD-P VARSPEC)
	    (COND ((NOT (EQ VARSPEC '&ALLOW-OTHER-KEYWORDS))
	           ...test here for bad keywords...		; <--- HERE
	           (%TRY-AUX LEXP ...))
		  ((ENDP (CDR VARLIST)) ..) ...etc.))

%BIND-KEY should not be calling (INTERN VARSPEC KEYWORD-PACKAGE). It has to
do (INTERN (GET-PNAME VARSPEC) KEYWORD-PACKAGE) -- or whatever your way of
getting pnames is. If you INTERN a symbol on the keyword package, you're
going to screw up the keyword package. Only strings should ever be interned
there.

Neither %BIND-KEY nor %PROCESS-KEY steps VARLIST. They should. For 
consistency with %PROCESS-OPTIONAL, it's probably best to step it in 
%PROCESS-KEY. [Note however that ARGS should not be stepped. You have done
this correctly.] So both calls to %BIND-KEY in %PROCESS-KEY look like:
 (%BIND-KEY LEXP OLDARGS (CDR VARLIST) OLDVENV FENV BENV GENV VENV 
	    ARGS SPECIALS BODY (CONS KWD KEYS))

%PROCESS-KEY does the VARP computation completely incorrectly. It cannot tell
if VAR was given by doing (NOT (NULL ARGS)). It needs to figure out if the
var was supplied while in the DO loop searching for it. Probably the shape
of the overall function should be:
  (DEFUN %PROCESS-KEY (LEXP ...)
    (LET ((VARP-VALUE NIL))
      (LET ((VALUE (DO (...) (...)
		     (WHEN (EQ (CAR A) KWD)
			   (SETQ VARP-VALUE T)
			   (RETURN (CADR A))))))
	(%BIND-VAR ... (IF VARP (%BIND-VAR VARP VARP-VALUE ...) ...)))))

Please check carefully over the references to ARGLIST and ARGS after making
the changes I suggest and verify that they are always being stepped 
appropriately in case I have missed something.

!
Misfeatures
-----------

Near line 48 of %EVAL, you do 
  (SETQ FN (CERROR :UNDEFINED-FUNCTION "The symbol ~S .." FN))
  (UNLESS (SYMBOLP FN)
    (RETURN (%APPLY FN ...)))
I understand why you split the two cases, but it still bothered me that
there were these two seemingly arbitrary paths to take. In thinking about
it more, I have isolated some specific reasons that this bothers me.
It seemed like there ought to be a uniform handling of all vals coming
back from this undefined function thing so that it was clearly defined
what kind of values could be returned. eg,

 If you return a symbol here, it retries the symbol evaluation in the
	lexical environment.

 If you return a lambda-expression here, it is NOT retried in the 
	lexical environment. This is a bug, really. %APPLY doesn't take an
	environment spec, so has no choice but to run the lambda in the global
	environment.

 If you return a bad functional object (eg, the number 3), you will pass it
	to %APPLY which will then run a continuable error again, but which
	if continued with a symbol result will close that symbol in the
	global environment, NOT in the lexical environment.

I suggest that the right solution to this problem is something like that
%APPLY should have arguments of VENV, FENV, BENV, GENV just like %EVAL does
and that it should use these where it can. Only %EVAL (and *APPLY, which
would be like the seemingly redundant *EVAL, I suspect) would be able to
pass non-null args to %APPLY. The normal case of APPLY would call %APPLY
with env args of NIL NIL NIL NIL. I realize that conceptually %APPLY should
be independent of understanding of the environment structure in a lexical
lisp, but that will kill error-recovery and I think it's worth considering
that in this case.

This is somewhat redundant with what I just said, but I have it listed
twice in my notes, so...
In %APPLY, neither a lexical name nor a lexical lambda are valid things to
return from the CERROR. This is reasonable for the case where one did
(APPLY fn ...) since presumably fn comes from another context, but it is
not reasonable when %EVAL is calling %APPLY as a subroutine.

The phrase "or lambda-list" in %LAMBDA-APPLY-1's error message
 "improper lambda or lambda-list"
is spurious. "improper lambda" would be sufficient.

In %BIND-KEY, you don't do anything useful with the return value from the
CERROR that complains about unexpected keys. Probably you should do something
in the way of allowing the user to return a new keyword and having it 
processed since it's going to be more likely that the user mis-spelled a 
keyword and wants it used than that he supplied an extra keyword that he
wanted ignored. Also, this CERROR doesn't provide much contextual information
to the user when it has a whole lot available that it could provide. I would
recommend making the message more informative.

Your definition of %BIND-AUX disallows multiple &AUX's in the lambda list.
Maclisp explicitly allows this to allow macros to have an easier time consing
up functions. They are always allowed to append '(&AUX ...) to the end of a
bvl without looking to see if there was already an &AUX in that bvl. I don't
care if this feature exists in Common-Lisp or not, I'm just pointing out
that you're taking a stand on the issue with your implementation.
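
The idiom I mean is a macro that does something like this (the macro itself
is made up for illustration):

(defmacro with-scratch-cell (bvl &body body)
  ;; Blindly tack a scratch &AUX variable onto whatever bvl the caller
  ;; supplied, without checking whether BVL already ends in an &AUX clause.
  `#'(lambda ,(append bvl '(&aux (scratch (list nil)))) ,@body))

If the caller's bvl already contains an &AUX, the resulting lambda-list has
two of them; Maclisp accepts that, but your %BIND-AUX will signal "unknown or
misplaced lambda-list keyword".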

!
Modularity issues
-----------------

I'm not clear on how global special forms and macros are stored.
But in the thing which does the fsymeval on the car of a list in %EVAL 
(near lines 35-45), it seems like if all these things
store their information in the global function cell, then it would
seem that you could change this to do
 (WHEN (FBOUNDP FN)
   (RETURN (LET ((SYM-FN (SYMBOL-FUNCTION FN)))
	     (COND ((SPEC-FORM-OBJ? SYM-FN) ...)
		   ((MACRO-OBJ? SYM-FN) ...)
		   (T ...)))))
since it looks now like SPECIAL-FORM-P and MACRO-P are doing implicit 
references to that global cell and you might as well cache the value. Is
this so? 

I think much of the code in %EVAL for evaluating symbols is duplicated in
the special form definition for FUNCTION later on. I suggest that these two
should call a common-subroutine to take care of the functionality they share
since otherwise you run a greater risk of having these two highly complex 
specifications accidentally diverge as modifications get made to this 
interpreter design.

In %BIND-VAR, you do
 `(LET ((VAR ,VAR) (VALUE ,VALUE)) (LET ((SPECP ...)) ...))
You'd be safer not making the assumption that VAR and VALUE don't occur 
in BODY by doing:
 (LET ((VAR-VAR (GENSYM)) (VALUE-VAR (GENSYM)))
   `(LET ((,VAR-VAR ,VAR) (,VALUE-VAR ,VALUE))
      (LET ((SPECP ...)) ...)))
or in LispM lisp, people would say you should do:
 (ONCE-ONLY (VAR VAL)
   `(LET ((SPECP ...))
      ...))

I am uncomfortable philosophically with the fact that you have to look for
'&KEY in the middle of %TRY-REST, but I have no better suggestion for control
structure right now.

!
Inconsistencies
---------------

Near line 6 of %EVAL, you define symbol evaluation. This evaluation does
not special-case NIL or T. This is acceptable if you mean it to be the case
that these variables may be bound. However, if you mean to allow NIL to be
bound, be warned that there are implicit assumptions all through your code
(eg, when you blur the distinction between (VAR INIT) and (VAR INIT VARP)
by just always taking (CADDR VARSPEC) without looking to see if VARP is there)
that hint that NIL cannot be bound. If it could be bound, then
(X 3 NIL) would want to bind NIL to T when X was supplied, while (X 3) would
not want to bind NIL. Basically, I think you're well-off to special case NIL
everywhere. I note that in %PROCESS-AUX, you special-case NIL's evaluation
by doing (AND INIT (%EVAL INIT ...)). No matter what is decided, you should
definitely make this consistent with what %EVAL does.

In the next to last line of FUNCTION, you return fn from (FUNCTION ...)
without type-checking it. This allows someone to return the number 3 from
the CERROR just prior and have FUNCTION return the 3. This goes along with
the modularity issue I bring up elsewhere about how FUNCTION and %EVAL are
duplicating effort, but I think FUNCTION should definitely recurse on the
result of this CERROR to be sure the guy gave back something valid. Also, 
for example, if he gives back a lambda expression from the CERROR, it's going
to be a global lambda, not a lexical lambda.

!
Things I didn't understand
--------------------------

The %BIND-xxx and %PROCESS-xxx functions all pass around an arg called
OLDVENV which is never used. I couldn't figure out what this arg might want
to be used for. It seems wasted.

!
Efficiency Considerations
-------------------------

%BIND-VAR calls MEMBER when it probably wants MEMQ.  %EVAL calls ASSOC
twice where it probably wants ASSQ. So do FUNCTION, RETURN, RETURN-FROM,
RESTART (now called RESTART-FROM), RESTART-FROM (obsolete), and GO. These
are in fact not just efficiency considerations, since they have
certain semantic implications as well if you consider what would happen
if people put flonums, arrays, lists, etc. in some of these places.

The only use you ever make of PROGV is with 1 or no vars. It would save a lot
of consing if there was a PROGV1 which took either a var or NIL in the first
position and a val for the second position. PROGV could be implemented 
recursively using PROGV1.
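
To pin down what I mean (PROGV1 is of course hypothetical; here I fake it in
terms of PROGV itself just to show the intended semantics, and I gloss over
the case where the values run out and the variable ought to be left unbound):

(defmacro progv1 (var val &body body)
  (let ((v (gensym)))
    `(let ((,v ,var))
       (progv (and ,v (list ,v)) (list ,val)
         ,@body))))

(defun progv-via-progv1 (vars vals thunk)
  ;; Bind the first variable with PROGV1, then recurse on the rest;
  ;; THUNK stands in for the body of the original PROGV.
  (if (null vars)
      (funcall thunk)
      (progv1 (car vars) (and vals (car vals))
        (progv-via-progv1 (cdr vars) (cdr vals) thunk))))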

The KEYS arg carried around by %BIND-KEY and %PROCESS-KEY and stuff is used
for nothing more than error checking and costs a lot of consing. I just want
to point out that this error checking comes at a price in case anyone cares.

Calling INTERN in realtime when doing LAMBDA applications that involve
keys is awfully expensive. It might be worth it to have DEFUN do a preparse
of the lambda list and expand (.. &KEY X ...) and (.. &KEY (X ...) ...)
to (... &KEY ((:X X) ...) ...) to save runtime later. This wouldn't help
when a person did (APPLY #'(LAMBDA (&KEY X ...) ...) ...), but that's probably
rare enough not to be an efficiency issue.
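
Concretely, the preparse could be a little function along these lines (the
name is made up; SYMBOL-PNAME and KEYWORD-PACKAGE are the ones your
interpreter already uses):

(defun preparse-key-specifier (spec)
  (cond ((symbolp spec)                 ; X  =>  ((:X X))
         (list (list (intern (symbol-pname spec) keyword-package) spec)))
        ((symbolp (car spec))           ; (X init ...)  =>  ((:X X) init ...)
         (cons (list (intern (symbol-pname (car spec)) keyword-package)
                     (car spec))
               (cdr spec)))
        (t spec)))                      ; already in ((:X X) init ...) form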

!
Special declarations
--------------------

The special variable SPECIALS has no associated DEFVAR.

The special variable %GENV% has no associated DEFVAR. Note that the %...%
notation is not necessary since presumably this definition will be interpreted
on an SI package, where all special variable names will not be user-accessible 
anyway.

There may be others. These were just the ones I happened to spot.

!
Implications
------------

You have numbers, strings, and characters self-evaling. When Scott's array
proposal is finalized, we should doublecheck that this is still what
everyone intends.

The expression:
    (MACROLET ((F ...))
      (FLET ((F ...))
	(F 3)))
means that (F 3) calls the FLET'd F, not the MACROLET'd F according
to my understanding of your interpreter. Is this what you meant and
will the compiler do a consistent thing? This should be made explicit
in the language spec.

I'm not sure if I agree with doing the
    (CONS FN (CDR EXP))
near line 28 of %EVAL. It seems to me that this thwarts the accuracy of
displacing macros to no good end. eg, consider the macro:
  (DEFMACRO F (X)
    (SETF (CAR X) 'G)
    (SETF (CADR X) `',(CADR X))
    X)
This macro will lose if it doesn't get the true cons because it will change
only the CADR and not the car. Then the next time, it'll layer another level
of (QUOTE ..) around X's CADR and G will get a wrong type arg. This has
happened to me sometimes and irritates me greatly. Also, I see little value to
putting 'F back on because I could have written
  (MACRO F (X) (FROB (CONS 'F (CDR X))))
if I cared anyway. I've almost never even wanted to write 
  (MACRO F (X) (CASEQ (CAR X) ...))
and always in that case I expect to lose on such a macro if someone calls it
with a bad CAR. Does Common-Lisp really take a stand on this issue or did
you just make an arbitrary decision?  If it's just your decision, I urge you
to quietly reverse it.

I was slightly confused initially by the WITH-CLOSURE-BINDINGS-IN-EFFECT
special form which is used in %APPLY. I assume this does an UNWIND-PROTECT
of assigning and de-assigning the special cells or some such thing? I'm not
really familiar with the decisions about what "closures over dynamic 
variables" are to mean in Common-Lisp, so this guess of mine is based purely 
on looking at how variables were bound and referenced in the rest of the code.

%LAMBDA-APPLY is assuming that a CATCH can get back two values from a THROW.
The LispM currently does not support this. Did they agree to this in 
Common-Lisp? They already have CATCH return extra values saying if a throw 
was done and if so to what tag and things like that. Further, they explicitly
specify that multiple-values cannot pass through some forms like CATCH due to
the funniness that wants to get returned from CATCH. Hence, even
(CATCH FOO (VALUES 1 2)) is likely to return 1, NIL, NIL, NIL or some such
thing from LispM Lisp.

In %LAMBDA-APPLY-RETRY, you have THROW being a special form. In most
implementations, it is a subr. Doing 
	(THROW '%LAMBDA-APPLY-RETRY (VALUES FN ARGS))
if THROW is a subr will make THROW receive only FN and not ARGS as a value,
so that's all that'll get thrown. At the very least, you want to make
THROW take multiple args to throw to the associated CATCH as in
	(THROW '%LAMBDA-APPLY-RETRY FN ARGS),
and even this assumes that your catch can catch multiple values, which
I suspect the LispMachine people haven't agreed to. (See note on
CATCH in previous paragraph).

In %LAMBDA-APPLY-1, your LAMBDA's allow multiple DECLAREs as in
 (LAMBDA (X Y) (DECLARE (SPECIAL X)) (DECLARE (SPECIAL Y)) ...).
I think this is reasonable, but it's not something I expected to find.
If LAMBDA will hack this, then it should be advertised since user macros
must look for more than one form being a potential DECLARE.

In %LAMBDA-APPLY-1, it is also the case that you do not do a MACROEXPAND of 
any sort before checking for DECLAREness of the leading forms in the body.
I think it would be good if you did, but I'm not going to push the issue.
I just wanted to point out that your interpreter has this property.

As EAK has pointed out already, %LAMBDA-APPLY-1 ignores all declarations
but SPECIAL. It might be worthwhile to think out how this information would
be passed around even though there are no primitives currently defined 
internally or externally in the interpreter which make use of the info.
Such figuring out would be interesting just to understand what functions need
to know about such information and what don't, etc.

I don't have my CL manual here as I write this or I'd check it, but the
valid syntaxes which %BIND-KEY appears to recognize are: sym, (sym),
(sym init), (sym init symp), ((key sym)), ((key sym) init), and 
((key sym) init symp). I presume that's what you intended.

!
Feature Suggestions
-------------------

It would be very useful if Common-Lisp interpreters were encouraged to do
a check in %BIND-REQUIRED just before the T clause of the toplevel COND
checking to see if things in the bvl are on the keyword package. That would
keep people from writing (LET ((:X ...)) ...) which would be disastrous.
It would also perhaps catch some novices who might confuse &keys and :keys
and do (LAMBDA (X :OPTIONAL Y) ...). This same point could check for binding
T and NIL, which would be quite useful.
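
The check itself is tiny; something like this (the function name is made up,
and I assume a KEYWORDP predicate or the equivalent test against the keyword
package):

(defun %check-bindable (lexp oldargs var)
  (if (or (eq var t) (eq var nil) (keywordp var))
      (%bad-lambda-exp lexp oldargs "attempt to bind T, NIL, or a keyword")
      var))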

!
Style Notes
-----------

The following assumes you like to write code like I do which distinguishes
false from the empty list even tho' they are obviously EQ objects by 
coincidence:
I disagree with your use of (NOT (NULL ...)) in a few places. eg, in 
%EVAL, I'd write (IF EVALHOOK ...), not (IF (NOT (NULL EVALHOOK)) ...)
since EVALHOOK is either a function or false, not a function or the 
empty list. In the same function you also write
	(AND (NOT (NULL SLOT)) (NOT (NULL (CDR SLOT))))
where I would write
	(AND SLOT (NOT (NULL (CDR SLOT))))
since in fact the value returned by ASSOC is false, not the empty list,
in the failing case.

I do not like the names %BIND-xxx, %PROCESS-xxx etc. because they do not 
imply that they will also process the body once the bindings are done.

The name %BIND-VAR is even more confusingly named because it looks so much
like %BIND-REQUIRED and friends, but it is a special form, not a function.

Rather than write (EQ (CADR SLOT) 'INVALID) and (RPLACA (CDR SLOT) 'INVALID),
I would write macros VALID-SLOT-P and INVALIDATE-SLOT which abstract this
functionality.
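
That is, something like these (they are not in your interpreter as
distributed, obviously):

(defmacro valid-slot-p (slot)
  `(not (eq (cadr ,slot) 'invalid)))

(defmacro invalidate-slot (slot)
  `(rplaca (cdr ,slot) 'invalid))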


!
Other Notes
-----------

The definition of BLOCK is meta-circular since it calls LOOP, which presumably
calls BLOCK. I'm sure there are several others. The entire thing is sort 
of implicitly meta-circular in that it presupposes an evaluator, but this is 
one of the few things that actually calls a function you bother to define.
Probably this doesn't matter at all, I just thought I'd mention it.

You glossed the detail of how closures, macros, etc. are represented. This
is apparent in uses of MACRO-P, SPECIAL-FORM-P, CLOSURE-FUNCTION, etc. I note
that these do not have %'s. I assume that was intentional (ie, that you mean
them to be user functions)?

MAKE-INTERPRETED-CLOSURE (called from FUNCTION) has no % either. Is that a
user-accessible function?

I was amused by the definition of APPLY. I have seen that trick for consing
args before, so wasn't utterly thrown, but it's a pretty odd-looking piece
of bummed code nevertheless. Probably some comments out in the right margin
wouldn't hurt too much for those who don't understand what it's up to. I had
to double check it about 4 times to assure myself that what looked reasonable
was indeed reasonable. (The alternative view says that maybe since it's so 
hairy it's best not to delude people by writing comments; they should have to
wade through it to see what it's really doing. I dunno.)

The code for %LAMBDA-APPLY is twice as amusing as that in APPLY. Boy is that
random control-structure.

The T clause of the %BIND-REQUIRED definition wasn't indented right in the
copy you sent out. This caused me some visual confusion because I thought
the parens might be misbalanced. I checked and they're ok.
-------

∂28-Sep-82  1753	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and vectors (again)
Date: Tuesday, 28 September 1982  20:53-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   DLW at MIT-MC
Cc:   common-lisp at SU-AI
Subject: Arrays and vectors (again)


    Well, the "RPG memorial" as it stands says that only strings can
    be used as inputs to the special "string-specific" functions, but
    your reply to me says that actually even non-simple char-arrays
    can be used as inputs to the string functions.

The following text is lifted verbatim from the RPG memorial proposal:

"A STRING is a VECTOR whose element-type (specified by the :ELEMENT-TYPE
keyword) is STRING-CHAR.  Strings are special in that they print using
the "..." syntax, and they are legal inputs to a class of "string
functions".  Actually, these functions accept any 1-D array whose
element type is STRING-CHAR.  This more general class is called a
CHAR-SEQUENCE."

Looks to me like I am saying that STRING-mumble accepts true strings and
also the more general CHAR-SEQUENCES.  At least, that was what I was
trying to say.  It just seemed too ugly to rename these functions
CHAR-SEQUENCE-mumble.  I admit that the naming is confusing here, since
some objects other than strings are accepted by these functions, but this
seems no worse to me than Zetalisp's decision to let the string
functions accept symbols as well.  The idea is that these are functions
that mostly work on strings and, as a special favor, they will also
swallow arbitrary 1-D arrays of characters.

    Also, what does stringp do?  Does it ever return t for anything with a fill
    pointer?  If so, then stringp returns t for things that are not strings,
    which seems unacceptable; if not, then stringp will return NIL for some
    things that print exactly the same way that strings print, and otherwise
    behave very similarly, which seems undesirable.

In an implementation that distinguishes strings from char-sequences,
STRINGP would return T for true (simple) strings and NIL for everything
else.  If you want to ask if something is a CHAR-SEQUENCE (which
includes strings as a subtype), you use CHAR-SEQUENCEP.  It is true that
non-string char-sequences print as if they were strings, but I don't see
the problem with that, unless the association between STRINGP and the "..."
syntax has become sacred.
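
To spell out my reading of it (CHAR-SEQUENCEP is the proposed predicate; the
MAKE-ARRAY keywords shown are just how I would expect the non-simple case to
be written):

(stringp "foo")                         ; => T, a simple string
(char-sequencep "foo")                  ; => T, strings are char-sequences
(setq buf (make-array 10 :element-type 'string-char :fill-pointer 0))
(char-sequencep buf)                    ; => T, accepted by the string functions
(stringp buf)                           ; => NIL, not a simple string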

-- Scott

∂29-Sep-82  0515	Ginder at CMU-20C 	Re: Arrays and vectors (again) 
Date: 29 Sep 1982 0815-EDT
From: Ginder at CMU-20C
Subject: Re: Arrays and vectors (again)
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 28-Sep-82 2103-EDT


Does the notion of char-sequence include lists of characters?  If it does not,
then using char-SEQUENCE to denote it will be confusing to new users.  If we
go the whole route and fully generalize the notion of char-sequence, someone
will probably propose that we include the notion of bit-sequence.  I don't know
if this is reasonable or not.

-Joe
-------

∂29-Sep-82  0635	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and vectors (again)
Date: Wednesday, 29 September 1982  09:35-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Ginder at CMU-20C
Cc:   common-lisp at SU-AI
Subject: Arrays and vectors (again)


Sigh!  Well, I suppose we could call these things
GENERAL-CHAR-1-D-ARRAYS, but that's pretty awful.  I must say that on
the grounds of cleanliness of nomenclature, the simple-switch proposal
seems to dominate the RPG memorial.

-- Scott

∂29-Sep-82  0654	Scott E. Fahlman <Fahlman at Cmu-20c> 	Arrays and Vectors   
Date: Wednesday, 29 September 1982  09:54-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: Arrays and Vectors


Guy will be preparing a ballot on the recent flurry of issues pretty
soon, but I would like to jump the gun on one of them.  I would really
like us to come to some sort of conclusion on the array/vector business.
Because this is a large issue and is pervasive, a lot of coding is being
delayed until this gets settled.  It would be very useful to see if
there is anything like a consensus out there.  If so, we can wrap this
up; if not, we may need another face-to-face meeting to bash out the
details in finite time -- I really don't want this to hang for another
month or two.

It seems to me that the live options are

1. Simple-switch.
2. RPG memorial.
3. Neither of the above.

Please let me know which of these options you prefer and which of the
others you would be willing to live with.  If you vote for 3, please
include a coherent counter-proposal, or at least a clear indication of
what you are unhappy about.

As for my vote, I could live with either 1 or 2.  I have a slight
preference for 1 (surprise!) because the nomenclature seems much less
confusing -- I think the added clarity outweighs the disadvantage of
having to write "simple" in some declarations.

-- Scott

∂29-Sep-82  0825	Ginder at CMU-20C 	Re: Arrays and vectors (again) 
Date: 29 Sep 1982 1126-EDT
From: Ginder at CMU-20C
Subject: Re: Arrays and vectors (again)
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 29-Sep-82 0940-EDT

Perhaps we need devote no name to the notion denoted as char(bit)-sequence in
the RPG memorial proposal.  Wouldn't it just be OK to say that "string
functions" accept those things that satisfy :

	(typep THING '(array string-char 1))

People would probably refer to these things as "1 dimensional character
arrays" or something when talking about them, but it's not clear to me
that we need devote a special type specifier to them.  If naming is a
problem, then maybe the simple-switch version is the win.
-Joe
-------

∂29-Sep-82  0940	HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility) 	Re: Arrays and Vectors 
Date: 29 Sep 1982 1241-EDT
From: HEDRICK at RUTGERS (Mgr DEC-20s/Dir LCSR Comp Facility)
Subject: Re: Arrays and Vectors
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 29-Sep-82 1019-EDT

My vote on this, as most other issues, is that I would like Guy
to design the language, keeping in mind the goals that it should
be as compatible with Maclisp and as simple as possible.  I am
very concerned about the results of having a language designed by
a committee.  I guess I am saying that I am giving Guy my proxy.
-------

∂29-Sep-82  0956	Guy.Steele at CMU-10A 	Design of Common LISP 
Date: 29 September 1982 1254-EDT (Wednesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject: Design of Common LISP

I am very grateful to Hedrick for his vote of confidence, but I must
point out that I am no less fallible than anyone else on the committee.
Two items that come to mind are the useless extensions to GCD and to
the ENDP predicate, lapses of taste I am glad have been corrected
through committee interaction.  This is, however, why we prefer
consensus to voting; presumably a unified committee will produce a more
or less unified and consistent design, while a design whose different
aspects are supported by different 51%-subsets of the committee is less
likely to be consistent.
--Guy


∂29-Sep-82  1127	RPG  	Proposals
To:   common-lisp at SU-AI  
Both the Simple Switch and ``RPG Memorial'' array proposals are
on the file <SAIL>ARRAY[COM,LSP], which can be FTPed away without
login. 
			-rpg-

∂29-Sep-82  1321	Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC> 	Re: Arrays and vectors (again) 
Date: Wednesday, 29 September 1982, 16:09-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
Subject: Re: Arrays and vectors (again)
To: Ginder at CMU-20C
Cc: common-lisp at SU-AI
In-reply-to: The message of 29 Sep 82 11:26-EDT from Ginder at CMU-20C

Good try but I really don't think it works.  (typep thing '(array
string-char 1)) is just too verbose, especially for programs that use
strings in any heavy kind of way.  I have been using a Lisp with real
strings for many years now, and STRINGP is a function that gets
reasonably heavy use.

Scott, I apologize for not reading your proposal carefully enough.  It
does indeed answer all my questions.

Now that I understand what's going on, I still don't like it.  The problem
is just that the naming is too complex and inconsistent.  I think you
and I agree on this point, at least in sign if not in magnitude.

One clear problem is that the criterion for acceptability to the STRING-
functions is not the same as the one for STRINGP, and so it's not really
clear what STRING means.  A more serious (but more debatable, I guess) problem is
just a conceptual one for me; I think of a string with a leader as being
a string, not a one-d-char-array, and that's why it prints out with
double quotes instead of #<array ...>.  I just have a feeling for what a
string is, and I'm quite certain that even a string with "hairy"
features like having an array-leader is still a string.

So (no surprise) I am in favor of the "simple-switch" proposal, with
about three exclamation points (in the November terminology).

∂29-Sep-82  1721	Alan Bawden <ALAN at MIT-MC> 	What is this RESTART kludge?  
Date: 29 September 1982 19:06-EDT
From: Alan Bawden <ALAN at MIT-MC>
Subject:  What is this RESTART kludge?
To: Common-Lisp at SU-AI

    Date: Monday, 27 September 1982  23:14-EDT
    From: Scott E. Fahlman <Fahlman at Cmu-20c>

    I agree with Alan Bawden that RESTART should not have a value-returning
    subform, and that RESTART-FROM is silly.  It is probably also OK to
    flush (RESTART NIL) and require a non-null block-name.

Slight misunderstanding here.  I was not proposing that "(RESTART NIL)" be
disallowed, only that "(RESTART)" be disallowed.  But I won't object if you
really want to go this far, as long as we at least take the step of requiring a
block name.  

I guess I am convinced that having RESTART built into the language is a good
idea, if only so that DEFUN can produce one.

I still think that using BLOCK as the thing to restart is a poor idea, and I
wish someone would take my idea of having a separate RESTARTABLE form and a
separate namespace of restart tags seriously (option #2 in my last message).
(DEFUN can produce BOTH so that you can both RETURN-FROM and RESTART any
function.)  I really think it is a bad mistake to build an implicit loop into
every BLOCK.

Let me raise a related issue.  Since (DEFUN FOO ...) includes an implicit
restart block (implemented either way), where does that restart block lie
with respect to the initialization code for &OPTIONAL and &AUX variables?

    By the way, are we converging toward the name TAGBODY?  I much prefer
    PROGBODY for this use.

I agree with GLS that TAGBODY is a better name.  PROG has nothing to do with
it.

∂29-Sep-82  1726	Brian G. Milnes <Milnes at CMU-20C> 	Issue 82 of the last CL meeting  
Date: Wednesday, 29 September 1982  19:01-EDT
From: Brian G. Milnes <Milnes at CMU-20C>
To:   common-lisp at SU-AI
Subject: Issue 82 of the last CL meeting


	Issue 82 of the last Common Lisp committee meeting states that
one-argument FLOAT should always return a single-float.

	This is not orthogonal with the rest of the numeric functions, because
although the default float result type is that of the most precise argument,
overflows from one float format roll over onto the float format of the next
greater precision.

	I am not sure if this is what is actually intended by the CL manual,
but it is the way SpiceLisp has been implemented.  Perhaps it would be nice if,
on the next pass over the numeric chapter, Guy would specify exactly how
overflow is handled, or say that it is implementation-specific.

	If the user wants FLOAT to return only a single-float, and to cause an
error if the argument will not fit, can't he simply use (float x 0.0s0)?  But if
the user wants any convenient float, without overflowing or getting bogged down
in a high precision, shouldn't he be able to do this with single-argument
FLOAT?

	- Brian G. Milnes (SpiceLisp NumberHacker)

∂29-Sep-82  1753	Scott E. Fahlman <Fahlman at Cmu-20c> 	What is this RESTART kludge?   
Date: Wednesday, 29 September 1982  20:53-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Alan Bawden <ALAN at MIT-MC>
Cc:   Common-Lisp at SU-AI
Subject: What is this RESTART kludge?


Fine, let's require RESTART to take a block name, but not outlaw
(RESTART NIL).  I'm not sure I'd ever use (RESTART NIL), but if it is
not really screwing anyone, it's less confusing to leave it in.

    Let me raise a related issue.  Since (DEFUN FOO ...) includes an implicit
    restart block (implemented either way), where does that restart
    block lie with respect to the initialization code for &OPTIONAL and &AUX variables?

For use in error recovery, it is important to have the implicit BLOCK
surround the body, but NOT the variable initializations.  Re-doing the
inits is hardly ever what you want to do, not to mention the fact that
this would tend to clobber any repairs the user has made to variable
values.
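
To make the intended scoping concrete, here is a sketch using Alan's example
and the restartable-block reading of DEFUN now under discussion (the expansion
shown is illustrative only, not a spec):

 (defun foo (l &optional (y (1+ (car l))))
   ;; The implicit restartable block begins here, AFTER the &optional
   ;; init has already run; (RESTART FOO) from the body comes back to
   ;; this point and so does not recompute Y.
   (block foo
     (when (oddp (car l))
       (cerror :bad-argument "Odd car: ~S" l)
       (restart foo))
     (list y (car l))))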

    I agree with GLS that TAGBODY is a better name.  PROG has nothing to
    do with it.

Well, it's the body of a PROG (or PROG-like construct), not the body of
a tag.  All sorts of things are documented as having a body like a prog,
so this seems a natural way to describe what's going on, at least to
those who already know Lisp.  Why pick on tags?  It's also a RETURN-BODY
and a SYMBOLS-ENCOUNTERED-AT-TOP-LEVEL-ARE-NOT-EVALUATED-BODY.

-- Scott

∂29-Sep-82  1946	Kent M. Pitman <KMP at MIT-MC>
Date: 29 September 1982 22:45-EDT
From: Kent M. Pitman <KMP at MIT-MC>
To: Fahlman at CMU-10A
cc: Common-Lisp at SU-AI

Your point that PROGBODYs are also RETURN-BODYs is what makes the
strongest argument for TAGBODY. The name PROGBODY is most suggestive
of "having the functionality one expects in a PROG's body" which includes
the ability to RETURN; i.e., that (PROG (...) . body) might mean
(LET (...) (PROGBODY ...)), which we obviously don't intend.  Hence, any
name not including the name PROG would be better because it would not
suggest functionality we don't intend it to provide. Hence, I think the
name TAGBODY is fine.
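
That is, as I understand the proposal, the decomposition we intend is roughly
the following, with RETURN coming from the surrounding block and only the
go-tags coming from the new form (a sketch, not the official expansion):

 (prog (x)
  again (setq x (+ (or x 0) 1))
        (when (< x 3) (go again))
        (return x))
 ;; behaves like
 (block nil
   (let (x)
     (tagbody
      again (setq x (+ (or x 0) 1))
            (when (< x 3) (go again))
            (return x))))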

∂29-Sep-82  1955	Skef Wholey <Wholey at CMU-20C> 	MAKE as a new name for SETF (gasp!)  
Date: Wednesday, 29 September 1982  22:50-EDT
From: Skef Wholey <Wholey at CMU-20C>
To:   Common-Lisp at SU-AI
Subject: MAKE as a new name for SETF (gasp!)

Although there was already a round of mail on the subject of changing the name
of SETF (to SET), I am re-suggesting that the name of SETF be changed.  The
two arguments against the name change were
	1) The time-honored SET would be clobbered, and
	2) A name change at this point would be absurd, since
	   there are many other badly-named things in the
	   language.
To the first, I suggest a new name, MAKE.  This not only enhances the clarity
of code (e.g. (MAKE (SPACE-SHIP-SPEED ENTERPRISE) 'WARP-7)), but frees us from
having to explain yet another bad choice of names to the next generation of
LISPers.  This brings us to the second point: that of changing a name at this
stage of the game.  Because this point has received so much discussion, and
because SETF will now appear so prominently in every program, it is clear that
this particular name change deserves some thought.  I urge you to think one
more time before closing the issue forever.

--Skef

∂29-Sep-82  2036	Scott E. Fahlman <Fahlman at Cmu-20c>   
Date: Wednesday, 29 September 1982  23:36-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Kent M. Pitman <KMP at MIT-MC>
Cc:   Common-Lisp at SU-AI


KMP is right.  I was thinking of the new "progbody" form as the thing
that implements the entire body of a PROG, but I went back to the
original proposal and realized that the body of a PROG is really this
new widget surrounded by a couple of blocks.  So PROGBODY would be a
misleading name, and TAGBODY or nearly anything else would be better.  I
still think TAGBODY is somewhat grotesque, but I don't have a better
suggestion right now.

-- Scott

∂29-Sep-82  2104	Kent M. Pitman <KMP at MIT-MC>
Date: 30 September 1982 00:01-EDT
From: Kent M. Pitman <KMP at MIT-MC>
To: Wholey at CMU-20C
cc: Common-Lisp at SU-AI

I don't like the name MAKE because it suggests a constructor, not a
mutator; e.g., I would expect MAKE-PAIR to mean CONS, not DISPLACE.

For the sake of those at CMU trying to get the first system out the
door, I would suggest that we not spend a lot of time on non-critical
naming issues until we get some of the more major issues worked out.
The change you suggest is an upward-compatible one which can safely
be discussed later.  The existence of the name SETF will not significantly
impair existing programming, since the operator's functionality is at
least stable.  At an appropriate later time, it would probably be worth
reviving this issue, since in the long run I too feel that SETF is not
desirable.
-kmp

∂29-Sep-82  2107	Scott E. Fahlman <Fahlman at Cmu-20c> 	MAKE as a new name for SETF (gasp!) 
Date: Wednesday, 29 September 1982  23:07-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Skef Wholey <Wholey at CMU-20C>
Cc:   Common-Lisp at SU-AI
Subject: MAKE as a new name for SETF (gasp!)


Gasp, indeed.  MAKE is used throughout Common Lisp as a prefix
indicating the creation (allocation) of a new data structure.  To use
MAKE for an operation that alters a slot in an existing data structure
would be terribly confusing.  I like this even less than changing SETF
to SET.

-- Scott

∂29-Sep-82  2330	Alan Bawden <ALAN at MIT-MC> 	What is this RESTART kludge?  
Date: 30 September 1982 02:23-EDT
From: Alan Bawden <ALAN at MIT-MC>
Subject:  What is this RESTART kludge?
To: Fahlman at CMU-20C
cc: Common-Lisp at SU-AI

    Date: Wednesday, 29 September 1982  20:53-EDT
    From: Scott E. Fahlman <Fahlman at Cmu-20c>

    For use in error recovery, it is important to have the implicit BLOCK
    surround the body, but NOT the variable initializations.  Re-doing the
    inits is hardly ever what you want to do, not to mention the fact that
    this would tend to clobber any repairs the user has made to variable
    values.

Can you provide examples to back up the claim that "re-doing the inits is
hardly ever what you want to do"?  Suppose:

(defun foo (l &optional (y (1+ (car l))))
  ...
  (when (oddp (car l))
    (cerror :bad-argument "Odd car: ~S" l)
    (restart foo))
  ...)

Can you really argue that it is wrong to recompute the value of Y in GENERAL if
it wasn't given by the caller and the list L has been given a new car?
(Granted you can probably construct an INSTANCE in which it is wrong to
recompute the value of Y.)

A good reason to make the restart redo the inits is that it would otherwise be
impossible for the programmer to ask for that behavior, whereas if he doesn't
want them redone he can always explicitly add a restart block around the body
of his defun.  Given that in general you are going to have to think about this
problem when you write a RESTART, I think it best to give the option of having
it either way by having the built-in restart do something that would be
impossible to do otherwise.

[Actually I think this is also a fairly good argument against the whole restart
 kludge.  No matter what we do here people are going to be fooled into
 believing that RESTART is something that it isn't.]


∂29-Sep-82  2349	MOON at SCRC-TENEX 	Issue 82 of the last CL meeting    
Date: Thursday, 30 September 1982  00:35-EDT
From: MOON at SCRC-TENEX
To:   common-lisp at SU-AI
Subject: Issue 82 of the last CL meeting
In-reply-to: The message of 29 Sep 1982  19:01-EDT from Brian G. Milnes <Milnes at CMU-20C>

    Date: Wednesday, 29 September 1982  19:01-EDT
    From: Brian G. Milnes <Milnes at CMU-20C>

    	Issue 82 of the last Common Lisp committee meeting states that
    one-argument FLOAT should always return a single-float.
This was to fix the previous travesty, where FLOAT could return a number of
smaller precision than expected, if you happened to hand it an integer with
only a few bits on (such as a power of two), since it would fit exactly.  Then
if you took the square root of that (for example), you would get a result
with less precision than you expected.

    	This is not orthogonal with the rest of the numeric functions, because
    although the default float result type is that of the most precise argument,
    overflows from one float format roll over onto the float format of the next
    greater precision.
This is not the way I read the bottom paragraph on page 117 of the 29 July Colander
edition, which is the only thing I can find in a quick search that might be
relevant to this.

I think it is probably best for the system to do as few behind-your-back
"smart", "helpful" tricks in the area of floating-point numbers as possible.
Certainly the 2 or 3 books on the subject I have seen are against this sort
of thing.  On the other hand, the proposed IEEE standard has some hand waving
in it that seems to be aimed at providing for switching to an alternate number
representation when overflow occurs.

I don't have an expert opinion about this, but my uninformed opinion is
that it would be better to signal exponent overflow than to switch to a
bigger number representation.  I -am- certain that it is better to signal
inexact result than to switch to a bigger number representation (in the
proposed IEEE standard, inexact result is the usually-disabled exception
that is signalled when low-order bits of the fraction are lost.)

    	If the user wants FLOAT to return only a single-float, and to cause an
    error if the argument will not fit, can't he simply use (float x 0.0s0)?  But if
    the user wants any convenient float, without overflowing or getting bogged down
    in a high precision, shouldn't he be able to do this with single-argument FLOAT?
This is a tough one.  Certainly it sounds plausible that FLOAT should work
rather than giving you an error.  On the other hand, in many implementations
the floating-point number format with the biggest precision and range is
substantially more expensive than the "standard one", so it may not be a good
idea to use it unexpectedly, and it certainly is not a good idea to make FLOAT
use it all the time.

Whatever FLOAT does, READ should do the same thing for floating-point number
syntaxes that don't explicitly specify a float type (e.g. ones with no exponent
or an E exponent).

If I have to vote, I will vote for leaving FLOAT the way it is: i.e. it always
returns "single" format.  By the way, 0.0s0 is "short" format, not "single" format.
See page 18 of the colander.
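
For reference, the distinction I have in mind, using two-argument FLOAT (whose
second argument is an example of the result type); the exact printed results
are only illustrative:

 (float 3)           ; => 3.0    "single" format, per issue 82 as it stands
 (float 3 0.0s0)     ; => 3.0s0  "short" format -- s means short, not single
 (float 3 0.0d0)     ; => 3.0d0  "double" format
 (float 1/3 0.0L0)   ; => a long-float approximation of 1/3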

∂29-Sep-82  2349	MOON at SCRC-TENEX 	arrays and vectors  (long carefully-thought-out message)    
Date: Thursday, 30 September 1982  01:59-EDT
From: MOON at SCRC-TENEX
To:   common-lisp at sail
Subject: arrays and vectors  (long carefully-thought-out message)

I prefer the "simple switch" to the "RPG memorial" proposal, with one
modification to be found below.  The reason for this preference is that
it makes the "good" name, STRING for example, refer to the general class
of objects, relegating the efficiency decision to a modifier ("simple").
The alternative makes the efficiency issue too visible to the casual user,
in my opinion.  You have to always be thinking "do I only want this to
work for efficient strings, which are called strings, or should it work
for all kinds of strings, which are called arrays of characters?".
Better to say, "well this works for strings, and hmm, is it worth
restricting it to simple-strings to squeeze out maximal efficiency"?

Lest this seem like I am trying to sabotage the efficiency of Lisp
implementations that are stuck with "stock" hardware, consider the
following:

In the simple switch proposal, how is (MAKE-ARRAY 100) different from
(MAKE-ARRAY 100 :SIMPLE T)?  In fact, there is only one difference--it is
an error to use ADJUST-ARRAY-SIZE on the latter array, but not on the
former.  Except for this, simpleness consists, simply, of the absence of
options.  This suggests to me that the :SIMPLE option be flushed, and
instead a :ADJUSTABLE-SIZE option be added (see, I pronounce the colons).
Even on the Lisp machine, where :ADJUSTABLE-SIZE makes no difference, I
think it would be an improvement, merely for documentation purposes.  Now
everything makes sense: if you don't ask for any special features in your
arrays, you get simple ones, which is consistent with the behavior of the
sequence functions returning simple arrays always.  And if some
implementation decides they need the sequence functions to return
non-simple arrays, they can always add additional keywords to them to so
specify.  The only time you need to know about the word "simple" at all is
if you are making type declarations for efficiency, in which case you have
to decide whether to declare something to be a STRING or a SIMPLE-STRING.
And it makes sense that the more restrictive declaration be a longer word.
This also meets RPG's objection, which I think boils down to the fact
that he thought it was stupid to have :SIMPLE T all over his programs.
He was right.
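
Concretely, what I have in mind looks like this (keyword and predicate
spellings are only illustrative):

 (make-array 100)                         ; a simple vector -- no special features
 (make-array 100 :adjustable-size t)      ; not simple; ADJUST-ARRAY-SIZE is legal on it
 (make-array 100 :fill-pointer 0)         ; not simple either, by the same rule

 (simple-vector-p (make-array 100))                     ; => T
 (simple-vector-p (make-array 100 :adjustable-size t))  ; => NIL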

I'm fairly sure that I don't understand the portability issues that KMP
brought up (I don't have a whole lot of time to devote to this).  But I
think that in my proposal STRINGP and SIMPLE-STRINGP are never the same
in any implementation; for instance, in the Lisp machine STRINGP is true
of all strings, while SIMPLE-STRINGP is only true of those that do not
have fill-pointers.  If we want to legislate that the :ADJUSTABLE-SIZE
option is guaranteed to turn off SIMPLE-STRINGP, I expect I can dig up
a bit somewhere to remember the value of the option.  This would in fact
mean that simple-ness is a completely implementation-independent concept,
and the only implementation-dependence is how much (if any) efficiency
you gain by using it, and how much of that efficiency you get for free
and how much you get only if you make declarations.

Perhaps the last sentence isn't obvious to everyone.  On the LM-2 Lisp
machine, a simple string is faster than a non-simple string for many
operations.  This speed-up happens regardless of declarations; it is a
result of a run-time dispatch to either fast microcode or slow microcode.
On the VAX with a dumb compiler and no tuning, a simple string is only
faster if you make declarations.  On the VAX with a dumb compiler but some
obvious tuning of sequence and string primitives to move type checks out of
inner loops (making multiple copies of the inner loop), simple strings are
faster for these operations, but still slow for AREF unless you make a type
declaration.  On the VAX with a medium-smart compiler that does the same
sort of tuning on user functions, simple strings are faster for user
functions, too, if you only declare (OPTIMIZE SPEED) [assuming that the
compiler prefers space over speed by default, which is the right choice in
most implementations], and save space as well as time if you go whole hog
and make a type declaration.  On the 3600 Lisp machine, you have sort of a
combination of the first case and the last case.

I also support the #* syntax for bit vectors, rather than the #" syntax.
It's probably mere temporal accident that the simple switch proposal
uses #" while the RPG memorial proposal uses #*.

To sum up:

A vector is a 1-dimensional array.  It prints as #(foo bar) or #<array...>
depending on the value of a switch.

A string is a vector of characters.  It always prints as "foo".  Unlike
all other arrays, strings self-evaluate and are compared by EQUAL.

A bit-vector is a vector of bits.  It always prints as #*101.  Since as
far as I can tell these are redundant with integers, perhaps like integers
they should self-evaluate and be compared by EQUAL.  I don't care.

A simple-vector, simple-string, or simple-bit-vector is one of the above
with none of the following MAKE-ARRAY (or MAKE-STRING) options specified:

	:FILL-POINTER
	:ADJUSTABLE-SIZE
	:DISPLACED-TO
	:LEADER-LENGTH, :LEADER-LIST (in implementations that offer them)

There are type names and predicates for the three simple array types.  In
some implementations using the type declaration gets you more efficient
code that only works for that simple type, which is why these are in the
language at all.  There are no user-visible distinctions associated with
simpleness other than those implied by the absence of the above MAKE-ARRAY
options.
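
For example, the payoff from simpleness comes only in declarations made for
speed, along the lines of this sketch (assuming the usual declaration
shorthand):

 (defun count-spaces (s)
   (declare (simple-string s))         ; the longer, more restrictive declaration
   (do ((i 0 (1+ i))
        (n 0))
       ((= i (length s)) n)
     (when (char= (aref s i) #\Space)
       (setq n (1+ n)))))

Declaring S to be a STRING instead makes the same code work on non-simple
strings, at whatever cost in speed the implementation imposes.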

∂30-Sep-82  0244	Kent M. Pitman <KMP at MIT-MC> 	Vectors/Arrays    
Date: 30 September 1982 05:42-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject: Vectors/Arrays
To: Common-Lisp at SU-AI

Having pondered the subject for a few days now, I'm pretty convinced by
DLW's and Moon's comments that the "simple" proposal is the best. Moon's
modified "simple" proposal seems fine to me; my vote goes with that.

∂30-Sep-82  0309	Kent M. Pitman <KMP at MIT-MC> 	RESTART 
Date: 30 September 1982 06:08-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject: RESTART
To: Common-Lisp at SU-AI

ALAN's position on RESTART seems valid. It seems to me conceptually
right that a function begins at the time the jump is done to its code.
Any computation which is done after that time should be redone when you
do a restart of the function.

Although seemingly part of the bound variable list, optional and aux args
are really just shorthand for code that is conceptually part of the body.

After all, it would be useful to preserve the property that things like
 (DEFUN F (X &OPTIONAL (Y 3)) (LIST X Y))
can be re-written as
 (DEFUN F (X &REST G0001)
   (LET ((Y (IF (NOT (NULL G0001)) (CAR G0001) 3)))
     (LIST X Y)))
If your RESTART didn't re-execute the &optional inits, then you couldn't
do this re-write without doing a code analysis to verify that either
RESTART wasn't called or that the inits were constant with respect to
the preceding arguments.

Certainly people will expect &AUX inits to be redone, since they think
 (DEFUN F (X &AUX (Y (...X...))) ...)
is the same as
 (DEFUN F (X) (LET ((Y (...X...))) ...)).
This re-write will also be unsafe without careful code analysis if the
&AUX inits, at least, are not re-run by (RESTART F).

∂30-Sep-82  0329	MOON at SCRC-TENEX 	RESTART   
Date: Thursday, 30 September 1982  06:17-EDT
From: MOON at SCRC-TENEX
To:   common-lisp at sail
Subject: RESTART

The recent discussion of the interaction of the RESTART feature with
the implicit block created by DEFUN has convinced me that the RESTART
feature is a crock whose semantics cannot be made both precise and
obvious.  It should be flushed, or at least paired with an explicit
RESTARTABLE marker.

∂30-Sep-82  0921	Glenn S. Burke <GSB at MIT-ML> 	vectors/arrays    
Date: 30 September 1982 12:23-EDT
From: Glenn S. Burke <GSB at MIT-ML>
Subject: vectors/arrays
To: common-lisp at SU-AI

I had been feeling uncomfortable for some time with not being able
to have something with a fill pointer that could be called a STRING,
even though I had been a proponent of keeping the STRING/VECTOR/BITwhatevers
simple.  Moon's proposal sounds good.

∂30-Sep-82  1034	Guy.Steele at CMU-10A 	Clarification    
Date: 30 September 1982 1329-EDT (Thursday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject: Clarification

TAGBODY was not intended to be a RETURN-BODY; only BLOCK is.  The proposed
evaluator reflects this, I am sure.
--Guy

∂30-Sep-82  1333	MOON at SCRC-TENEX 	Issue 82 comment, your reply and number crunching 
Date: Thursday, 30 September 1982  15:52-EDT
From: MOON at SCRC-TENEX
To:   Brian G. Milnes <Milnes at CMU-20C at MIT-MC>
Cc:   fahlman at CMU-20C at MIT-MC, steele at CMU-20C at MIT-MC,
      wholey at CMU-20C at MIT-MC, common-lisp at sail,
      rlb at SCRC-TENEX
Subject: Issue 82 comment, your reply and number crunching
In-reply-to: The message of 30 Sep 1982  14:59-EDT from Brian G. Milnes <Milnes at CMU-20C at MIT-MC>

Your point boils down to this, if I understand it:

Different floating point formats may have not only different precisions, but
also different exponent ranges.  You don't care about the precision, but only
want the smallest floating point format that has enough exponent range to hold
the number you're floating.

I see a couple problems with this.  One is that you have to decide somehow
which format is the smallest you use.  If you have an integer that you are
floating which is small enough that it would fit in the tiniest
floating-point format (say it's 1), you don't want FLOAT to use that format
because then the precision would be too small.  FLOAT has to extend upwards,
but not downwards.

The other is that you are assuming automatic conversion to larger formats, with
more exponent range, on overflow or underflow by all floating-point arithmetic
operations.  Just putting it in FLOAT is no good; you might FLOAT a number that
barely fits, then add 1 to it and get an overflow.  I'm not sure whether all
Common Lisp implementations want to commit to this.

I have no vested interest in precision, by the way, and am not a big user
of floating-point numbers, just an implementor.  I have learned enough
about it to feel concern about keeping Common Lisp away from some
known pitfalls.

It sounds like more discussion by more people is called for.

∂30-Sep-82  1333	MOON at SCRC-TENEX 	Issue #97, Colander page 134: floating-point assembly and disassembly 
Date: Thursday, 30 September 1982  05:55-EDT
From: MOON at SCRC-TENEX
To:   Common-Lisp at sail
Subject: Issue #97, Colander page 134: floating-point assembly and disassembly

I am not completely happy with the FLOAT-FRACTION, FLOAT-EXPONENT, and
SCALE-FLOAT functions in the Colander edition.  At the meeting in August I
was assigned to make a proposal.  I am slow.

A minor issue is that the range of FLOAT-FRACTION fails to include zero (of
course it has to), and is inclusive at both ends, which means that there
are two possible return values for some numbers.  I guess that this ugliness
has to stay because some implementations require this freedom for hardware
reasons, and it doesn't make a big difference from a numerical analysis point
of view.  My proposal is to include zero in the range and to add a note about
two possible values for numbers that are an exact power of the base.

A more major issue is that some applications that break down a flonum into
a fraction and an exponent, or assemble a flonum from a fraction and an
exponent, are best served by representing the fraction as a flonum, while
others are best served by representing it as an integer.  An example of
the former is a numerical routine that scales its argument into a certain
range.  An example of the latter is a printing routine that must do exact
integer arithmetic on the fraction.

In the agenda for the August meeting it was also proposed that there be
a function to return the precision of the representation of a given flonum
(presumably in bits); this would be in addition to the "epsilon" constants
described on page 143 of the Colander.

A goal of all this is to make it possible to write portable numeric functions,
such as the trigonometric functions and my debugged version of Steele's
totally accurate floating-point number printer.  These would be portable
to all implementations but perhaps not as efficient as hand-crafted routines
that avoided bignum arithmetic, used special machine instructions, avoided
computing to more precision than the machine really has, etc.

Proposal:

SCALE-FLOAT x e -> y

  y = (* x (expt 2.0 e)) and is a float of the same type as x.
  SCALE-FLOAT is more efficient than exponentiating and multiplying, and
  also cannot overflow or underflow unless the final result (y) cannot
  be represented.

  x is also allowed to be a rational, in which case y is of the default
  type (same as the FLOAT function).

  [x being allowed to be a rational can be removed if anyone objects.  But
   note that this function has to be generic across the different float types
   in any case, so it might as well be generic across all number types.]

UNSCALE-FLOAT y -> x e
  The first value, x, is a float of the same type as y.  The second value, e,
  is an integer such that (= y (* x (expt 2.0 e))).

  The magnitude of x is zero or between 1/b and 1 inclusive, where b is the
  radix of the representation: 2 on most machines, but examples of 8 and
  16, and I think 4, exist.  x has the same sign as y.

  It is an error if y is a rational rather than a float, or if y is an
  infinity.  (Leave infinity out of the Common Lisp manual, though).
  It is not an error if y is zero.

FLOAT-MANTISSA x -> f
FLOAT-EXPONENT x -> e
FLOAT-SIGN x -> s
FLOAT-PRECISION x -> p
  f is a non-negative integer, e is an integer, s is 1 or 0.
  (= x (* (SCALE-FLOAT (FLOAT f x) e) (IF (ZEROP S) 1 -1))) is true.
  It is up to the implementation whether f is the smallest possible integer
  (zeros on the right are removed and e is increased), or f is an integer with
  as many bits as the precision of the representation of x, or perhaps a "few"
  more.  The only thing guaranteed about f is that it is non-negative and
  the above equality is true.

  f is non-negative to avoid problems with minus zero.  s is 1 for minus zero
  even though MINUSP is not true of minus zero (otherwise the FLOAT-SIGN function
  would be redundant).

  p is an integer, the number of bits of precision in x.  This is a constant
  for each flonum representation type (except perhaps for variable-precision
  "bigfloats").

  [I am amenable to converting these four functions into one function that
  returns four values if anyone can come up with a name.  EXPLODE-FLOAT is
  the best so far, and it's not very good, especially since the traditional
  EXPLODE function has been flushed from Common Lisp.  Perhaps DECODE-FLOAT.]

  [I am amenable to adding a function that takes f, e, and s as arguments
   and returns x.  It might be called ENCODE-FLOAT or MAKE-FLOAT.  It ought to
   take either a type argument or an optional fourth argument, the way FLOAT
   takes an optional second argument, which is an example of the type to return.]

FTRUNC x -> fp ip
  The FTRUNC function as it is already defined provides the fraction-part and
  integer-part operations.

These functions exist now in the Lisp machines, with different names and slightly
different semantics in some cases.  They are very easy to write.
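
A worked example of how the pieces fit together (the values shown are only one
legal possibility, given the latitude described above):

 (scale-float 1.0 3)      ; => 8.0
 (unscale-float 12.0)     ; => 0.75 and 4, since 12.0 = 0.75 * 2^4 in radix 2

 ;; One legal decomposition of -12.0 is f = 3, e = 2, s = 1, because
 (= -12.0 (* (scale-float (float 3 -12.0) 2) -1))   ; => T
 ;; an implementation with a 24-bit fraction may instead return
 ;; f = 12582912 and e = -20 for the same number.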

Comments?  Suggestions for names?

∂30-Sep-82  1404	Scott E. Fahlman <Fahlman at Cmu-20c> 	Issue 82 comment
Date: Thursday, 30 September 1982  17:04-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Common-Lisp at SU-AI
Subject: Issue 82 comment


It is not necessarily the case that larger-format floats will always
have larger exponent ranges.  It is also not clear to me that quietly rolling
over into the next-larger float is the right thing to do on exponent
overflow, or that such rollover is efficiently implementable on all
machines.  On the Vax, users will often have declared a single type of
float for efficiency, and compiled this in; in such cases, we clearly
want a runtime error rather than a "helpful" coercion.  I think that it
is probably best for the white pages to specify that floating-exponent
overflows always signal an error.

Ideally, the error handler should get enough info so that the user can
supply a quietly-size-expanding handler if he wants to.  This means
that, if possible (can we require this?) it should be a correctable
error, which is passed the operation name and the original args; the
handler can then return the "answer", computed however it likes and in
any format it likes.

-- Scott

∂30-Sep-82  1447	Scott E. Fahlman <Fahlman at Cmu-20c> 	Down with RESTART    
Date: Thursday, 30 September 1982  17:47-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Alan Bawden <ALAN at MIT-MC>
Cc:   Common-Lisp at SU-AI
Subject: Down with RESTART


Alan,

When I said that re-initializing the variables is almost never the right
thing to do, I had several things in mind:

1. In an iterative situation, where the variables are counting or
accumulating something and the problem occurs after several iterations,
you usually want to restart where you left off (if anywhere), and not
re-init such variables.

2. In the case of optionals, you probably have no way to tell whether
the value came from the user or was computed.  Even if it was computed,
you probably don't want to zap it unless it is a function of whatever it
was that you changed to correct the error.  I would claim that having
things in the arglist depend on one another in this way is relatively
rare compared to constant inits or inits taken from some external
special variable.

3. It bothers me to have RESTART cut in between elements in an arglist.

Anyway, you have made your point: RESTART is confusing and we are better
off without it.  There is no point in putting in a special
RESTARTABLE-BLOCK form -- if I can't take advantage of the implicit
block in a defun, I may as well write a PROG and GO as a
RESTARTABLE-BLOCK and a RESTART.  I can make my code perspicuous by
using RESTART as the tag name, something we have all been doing for
years anyway.
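
For instance, reusing Alan's example, giving up the feature costs no more than
this (a sketch):

 (defun foo (l &optional (y (1+ (car l))))
   (prog ()
    restart                                ; the conventional tag name
      (when (oddp (car l))
        (cerror :bad-argument "Odd car: ~S" l)
        (go restart))
      (return (list y (car l)))))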

-- Scott

∂30-Sep-82  1535	Kent M. Pitman <KMP at MIT-MC> 	RESTART 
Date: 30 September 1982 18:20-EDT
From: Kent M. Pitman <KMP at MIT-MC>
Subject: RESTART
To: common-lisp at SU-AI

I agree with Moon that RESTARTing DEFUNs is confusing. I was apalled by
Fahlman's intended application. Nevertheless, I have found that certain
people I know would be happy to "take advantage" of the fact that I had
provided them with a PROG or TAGBODY even if I were using it in a structured
way only to go back to the top. I think therefore that having an explicit
RESTARTABLE special form is a good idea and would discourage people from
"malicious" edits that added extra tags. Also, there's something visually
nice about signalling this constrained use of PROG with a special name.
If it's something that we'd expect to happen a lot, we might as well give
it a common name so people can read each others' code rather than have
everyone write the macro themselves and give it a different name.
-kmp

∂30-Sep-82  1553	Scott E. Fahlman <Fahlman at Cmu-20c> 	RESTART    
Date: Thursday, 30 September 1982  18:45-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Kent M. Pitman <KMP at MIT-MC>
Cc:   common-lisp at SU-AI
Subject: RESTART


Well, as long as we're being apalled (or even appalled, for those of you
who are into traditional spelling), I am apalled by the concept of
"malicious edits".  If your system (meaning the combination of code and
social conventions) allows such things, fix the system, not the
programming language.  I would hate to think what Common Lisp would look
like if we were to eliminate every feature that might tempt some idiot
to stick in a line of unclean code.  This is a powerful language, and it
cannot possibly be made vandal-proof.  There might be some good
arguments for adding a RESTARTABLE form, but this isn't one.

-- Scott

∂30-Sep-82  1601	Earl A. Killian <EAK at MIT-MC> 	arrays and vectors  (long carefully-thought-out message) 
Date: 30 September 1982 18:57-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  arrays and vectors  (long carefully-thought-out message)
To: MOON at SCRC-TENEX
cc: common-lisp at SU-AI

Having :ADJUSTABLE-SIZE is obviously right, regardless of the
rest of the vector/array design.

∂01-Oct-82  0107	Alan Bawden <ALAN at MIT-MC> 	DEFSTRUCT options syntax 
Date: 1 October 1982 04:06-EDT
From: Alan Bawden <ALAN at MIT-MC>
Subject:  DEFSTRUCT options syntax
To: Common-Lisp at SU-AI

From the last meeting:

  17. Can we standardize on keywords always being used as
      name-value pairs?  The worst current deviants are
      WITH-OPEN-FILE and DEFSTRUCT options.  

          Yes.  The Lisp Machine LISP group will make a
          proposal soon for OPEN, WITH-OPEN-FILE, and
          DEFSTRUCT.

While everyone agrees that OPEN and WITH-OPEN-FILE should be fixed (and they
already have been fixed on the Lisp Machine), the case for DEFSTRUCT is not as
clear.  In my opinion the change is gratuitous since the options list in a
defstruct is NOT a function call.  Furthermore, it is more than a trivial
incompatible change, since each option would have to be re-thought in light of
the fact that it could then be given only one argument rather than any number.  (Note that
the :CONSTRUCTOR and :INCLUDE options take advantage of this multiple-argument
ability.)
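
For example, both of those options currently rely on being able to take more
than one argument, as in this sketch (the structure and slot names are made
up):

 (defstruct vehicle
   (max-speed 50))

 (defstruct (ship (:constructor make-ship (x y &optional name))
                  (:include vehicle (max-speed 99)))
   x
   y
   name)

Under a strict keyword/value-pair rule, each of these options would have to be
squeezed into a single argument somehow.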

I asked Moon and DLW if they agreed with me on this subject.  Moon replied:

    Date: Thursday, 30 September 1982  00:44-EDT
    From: MOON at SCRC-TENEX
    To:   DLW at SCRC-TENEX
    cc:   Alan
    Re:   gratuitous change to defstruct syntax.

    I've changed my mind about this since August, and am now in agreement with
    Alan.  Partly this was caused by thinking about what it would mean to
    change DEFFLAVOR to use alternating keywords and values; it seems very
    clear that it would make it much worse.  Thinking about this more made me
    decide that some special forms have syntax that looks something like a
    function call, but many are totally unrelated to function calls and trying
    to wedge them into the same mold is just confused (and confusing).
    Certainly the way OPEN used to be was wrong, and fixing it was a big win.
    But I think DEFSTRUCT should stay with "option" or "(option args...)"
    syntax, as should DEFFLAVOR, DEFSYSTEM, DEFSITE, and who knows what else.
    It probably is not a coincidence that these are all "defining" forms.

DLW is also in agreement with me on this.  How about it folks, can we keep
DEFSTRUCT parsing its options the way it is now?

∂01-Oct-82  0546	Scott E. Fahlman <Fahlman at Cmu-20c> 	DEFSTRUCT options syntax  
Date: Friday, 1 October 1982  08:48-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Alan Bawden <ALAN at MIT-MC>
Cc:   Common-Lisp at SU-AI
Subject: DEFSTRUCT options syntax


I guess I could go either way on forcing DEFSTRUCT and friends to be
consistent with the rest of the language.  Consistency is nice, and
important enough that the change is not gratuitous, but should be
abandoned if it really is screwing things up.  "...the hobgoblin of
little minds" and all that.

As a piece of additional food for thought, Gary Brown of DEC has been
working out the details of a red-pages extension to the Vax Common Lisp
that would allow the Lisp user to access the more complex file types
(especially record-oriented ones) in RMS.  The obvious thing to do is to
add a bunch of new options to OPEN and WITH-OPEN-FILE to specify the
additional attributes that RMS needs to know about -- record size,
buffering strategy, etc.  This, too, would look much cleaner if some of
the keywords could take multiple args.  Note that OPEN is really
"DEF-STREAM", sort of -- all the open-ended requirements for specifying
a complicated new object, albeit a volatile one, are present.  There are
other ways to do this for OPEN, I guess, as there are for the other
"defs"; my point is just that OPEN is pretty much like the others.

I'm uneasy about making special forms too special, and creating forms
that the user cannot duplicate with macros or some such.  Perhaps we can
extend the basic keyword mechanism to allow keyword/value, but also
allow calling forms like (foo :key1 value1 (:key2 value2a value2b) ...)
However, coming up with an attractive way to specify this in a macro's
lambda-list will be pretty tough.

-- Scott

∂01-Oct-82  1642	JMC  	setf → set    
To:   common-lisp at SU-AI  
A serious objection to  setf → set  is that Lisp needs an  x  such that
x is to setf as set is to setq.